2008-08-26

Presentation: Hacking Industrial Robots



I'll talk about my work on industrial robots. The robots belong to my university, PJIIT.

Here you can see our two Motoman SK6 robots. They were bought by the Japanese government, from a fund to support developing countries. The robots were produced in Japan, assembled in Sweden, serviced from Germany, and installed in Poland.

They were built for industrial welding and painting. They have no feedback: they will do whatever it takes to reach the destination point, no matter if someone's head is in the way. So watch out, they really could kill.

This is how our lab looks today. Notice the big grey box on the left (and part of another one on the right). It's the robot's controller, which is responsible for moving the robot. All software for the robots runs on these controllers.

On this slide you can see how the robot welds something. The robot moves to a specific point in space, welds, then moves to another point and so on. It does the same thing over and over. It's similar with painting robots: they are programmed once to paint a part and then repeat the same movement for years. On the right you can see an example of a program for a robot. It's called a job, and it's basically just a series of points the robot should move to, plus some simple commands to enable or disable the welder or brush.
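Just to give you a feeling for it, here is a made-up job in the spirit of the controller's job language (the exact syntax differs between controller versions, so treat this purely as an illustration):

    NOP
    MOVJ VJ=25.00    ' joint move to the approach point
    MOVL V=120.0     ' linear move to the start of the seam
    ARCON            ' welder on
    MOVL V=50.0      ' weld along a straight line
    ARCOF            ' welder off
    MOVJ VJ=25.00    ' move away
    END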


This is an example job on our robot. It's programmed on the controller; no computer is involved in controlling the robot. We use it to show visitors how the robot moves.


This is the same job as on the previous slide, but at full speed. When I watch this job I'm always impressed by the robot's capabilities. But maybe you need to be in our lab to get that feeling.

This device is used to write jobs for the robot. As you can see, there are a lot of buttons on this console. Most of them don't do anything useful. On the other hand, there are a lot of cryptic shortcuts that are used quite often (like: press the star button together with the X key).

On the right you can see an example job again. The bold part is what you see on the console. All the other information is magic, hidden from the programmer.

The console has the hardest interface I've ever used. The point is that it was never intended to be easy to use. As I mentioned before, a job is created once and then runs for years.

The problem is that I don't need a repeatable job in my lab. I want the robot to do something dynamic: grab something from the floor, catch a ball, give me a hand and so on. I don't want a robot that does the same moves over and over again. That's useless to me.

So the question is whether an industrial robot in a lab is useful at all. Most of these robots are useless for such work: they can't be controlled dynamically or connected to a computer.

Fortunately our robots have an additional Turbo Function board.

The board is programmable and can be connected to a computer over a serial link. It has an unusual Intel i960 processor.

The serial link isn't perfect either. It's very slow - 19.2 kbps, like a '93 modem - and it's only half duplex. Still, the link can be used to send data from the computer to the robot.
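For the curious: from the Linux side such a link is plain termios programming. A minimal sketch (the device path is just an example):

    #include <fcntl.h>
    #include <termios.h>
    #include <unistd.h>

    int open_robot_link(const char *dev)    /* e.g. "/dev/ttyS0" */
    {
        int fd = open(dev, O_RDWR | O_NOCTTY);
        if (fd < 0)
            return -1;

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);                    /* raw bytes, no line discipline */
        cfsetispeed(&tio, B19200);
        cfsetospeed(&tio, B19200);
        tio.c_cflag |= CLOCAL | CREAD;
        tcsetattr(fd, TCSANOW, &tio);
        return fd;    /* half duplex: don't transmit while the robot is talking */
    }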

The docs say the board has only a basic interface to move the robot: go to a point and stop there. It offers no dynamic control of any kind.

I've mentioned that the Turbo board is programmable. To write software for it you had to:
  • write a program at home, in C
  • move the source code on a floppy to our old Windows box, called Smalltalk (the PC on the left side of the picture)
  • compile the code using a commercial compiler (tied to that box by a hardware key)
  • open the FC1 software to transfer the compiled binary to the robot
  • turn on the robot, press a lot of keys
  • wait a few minutes while the binary uploads
  • restart the robot, press a lot of keys to execute the binary
Just getting your software to run on the robot took a lot of time, which slowed down the whole development process. As always, software is written with bugs, so for every change, even the smallest one, you had to repeat the procedure.

This was so time consuming that I decided to speed up the process and move the compiler stack to some newer OS, like Linux.

The first thing I did was create a clone of the FC1 software, so that binaries could be uploaded from a Linux box. The communication between FC1 and the robot went over the serial link, so I opened up the cable, attached some wires to another computer (on the right side of the picture) and sniffed the traffic. After some reverse engineering I wrote a simple Linux replacement for FC1. This was my first achievement.
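The listening side was trivial once the wires were attached: the second computer just read its serial port and dumped every byte. Roughly (a sketch; it reuses open_robot_link from the earlier snippet):

    #include <stdio.h>
    #include <unistd.h>

    /* Passively log one direction of the tapped link as hex. */
    void sniff(int fd)
    {
        unsigned char buf[64];
        ssize_t n;

        while ((n = read(fd, buf, sizeof(buf))) > 0) {
            for (ssize_t i = 0; i < n; i++)
                printf("%02x ", buf[i]);
            fflush(stdout);
        }
    }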

The next thing I needed to do was move the compiler stack to Linux. I decompiled the commercial libraries on Smalltalk, tested various open libc implementations, found an old GCC version that still supported this unusual processor, and wrote my own linker scripts. Finally I was able to compile software for the board on Linux.
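The idea looked roughly like this; the target name and the load address below are illustrative, not the real ones:

    # hypothetical cross-compiler invocation
    i960-coff-gcc -O2 -c motion.c -o motion.o
    i960-coff-ld -T turbo.ld motion.o -o motion.bin

    /* turbo.ld - a minimal linker script in the same spirit */
    SECTIONS
    {
      . = 0x00100000;       /* hypothetical load address on the board */
      .text : { *(.text) }
      .data : { *(.data) }
      .bss  : { *(.bss)  }
    }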

On the left side of the slide you can see a page from the documentation. It's in Japanese and yes, I don't speak Japanese either. Most of the docs are in English, but unfortunately the most interesting parts aren't.

As always, the documentation is not perfect. We could compile software for the Turbo board, but we didn't know what was actually possible on it.

Using the Turbo board we could control the robot only in a basic way: move to a point and stop there.

After a lot of work and digging through a lot of sources, we discovered something with the magic name "realtime mode". This mode lets us order the robot to execute a large number of small moves without stopping between them.

This was the thing we were looking for!

On the right you can see our first experiment. The red line shows the position we ordered the robot to go to; the green line shows where the robot actually was. As you can see, the reaction time (latency) is not small, but it works!
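Conceptually the realtime mode is just a stream of small setpoints sent at a fixed rate. The wire format below is invented for illustration (the real one came out of the disassembly), but the shape of the loop is the point:

    #include <math.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical wire format, invented for this sketch. */
    static void send_setpoint(int fd, double x, double y, double z)
    {
        char buf[64];
        int n = snprintf(buf, sizeof(buf), "P %.2f %.2f %.2f\n", x, y, z);
        write(fd, buf, n);
    }

    /* Stream a circle in the XZ plane as many tiny moves. */
    void stream_circle(int fd, double cx, double cz, double r)
    {
        for (int i = 0; ; i = (i + 1) % 360) {
            double a = i * M_PI / 180.0;
            send_setpoint(fd, cx + r * cos(a), 0.0, cz + r * sin(a));
            usleep(20000);      /* ~50 setpoints per second */
        }
    }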


After this discovery we wanted to try it out, so we wrote software that let us control the robot using a joystick. Playing with the joystick was really fun, but it isn't the simplest interface. (Sorry for destroying the target in the video.)
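Reading the joystick on Linux is easy thanks to the kernel's joystick interface. A minimal sketch that maps two axes to a velocity (what happens with vx and vy afterwards is up to the robot layer):

    #include <fcntl.h>
    #include <linux/joystick.h>
    #include <unistd.h>

    void joystick_loop(void)
    {
        int fd = open("/dev/input/js0", O_RDONLY);
        struct js_event e;
        double vx = 0.0, vy = 0.0;

        while (read(fd, &e, sizeof(e)) == sizeof(e)) {
            if ((e.type & ~JS_EVENT_INIT) == JS_EVENT_AXIS) {
                if (e.number == 0) vx = e.value / 32767.0;  /* left-right */
                if (e.number == 1) vy = e.value / 32767.0;  /* up-down */
                /* here: translate (vx, vy) into a stream of setpoints */
            }
        }
        close(fd);
    }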

This wasn't the end of our low-level problems. To reduce the latency we needed a faster serial link; 19.2 kbps was too slow for us.

I downloaded the firmware from the Motoman, disassembled it and discovered that the robot could use a faster serial speed. The speed was unusual; using an oscilloscope we measured it as 46.6 kbps. But this is not a standard serial speed, and a normal computer can't use it.

This is how our communication chain looks today. The robot talks at 46.6 kbps, then we convert the signal to the industrial RS-422 format, and finally a custom-built FTDI USB-to-serial converter brings it to the PC.

The converter can be set to arbitrary serial speeds, so it can achieve 46.6 kbps. In the end we can use it as a normal serial device on Linux.
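On Linux the trick is the old "custom divisor" feature of the serial driver: you set a divisor so that the legacy B38400 setting is reinterpreted as your odd baud rate. A sketch:

    #include <linux/serial.h>
    #include <sys/ioctl.h>
    #include <termios.h>

    /* Make B38400 on this port actually mean the given baud rate. */
    int set_custom_baud(int fd, int baud)
    {
        struct serial_struct ss;

        if (ioctl(fd, TIOCGSERIAL, &ss) < 0)
            return -1;
        ss.flags = (ss.flags & ~ASYNC_SPD_MASK) | ASYNC_SPD_CUST;
        ss.custom_divisor = ss.baud_base / baud;
        if (ioctl(fd, TIOCSSERIAL, &ss) < 0)
            return -1;

        struct termios tio;
        tcgetattr(fd, &tio);
        cfsetispeed(&tio, B38400);   /* B38400 now maps to the custom rate */
        cfsetospeed(&tio, B38400);
        return tcsetattr(fd, TCSANOW, &tio);
    }

Call it as set_custom_baud(fd, 46600) and the chip picks the nearest divisor it can manage.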

After more than two years of work we were able to compile software from Linux, and we had disassembled a lot of the robot's internals. I wasn't able to fix the half-duplex problem, but I created a software library that works around it.

The biggest achievement was discovering the realtime mode and moving the robot dynamically.

It's time to show what we were able to build on top of this low-level software!

This project is one of the oldest. The idea is to create a cube that the robot can grab from the floor and do something with.

For example, hand it to the second robot, so that one robot grabs cubes and the other builds something from them (for example a tower). When the tower crashes, the roles are swapped.

The cubes are located using information from the camera. This slide shows my work on the markers for the cubes. The red outline is computer generated.

This is how the marker recognition worked in practice. The numbers 777 and 778 are encoded in the red squares. There's also a green, computer-generated number which shows the angle of the cube.

In this video the robot moves to the start position. Then, based on information from the camera, it makes smaller and smaller moves toward the cube and finally grabs it.
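The "smaller and smaller moves" part is a classic visual-servoing loop: on every iteration the robot moves by a fraction of the remaining error seen by the camera. A sketch, with hypothetical placeholders for the vision and motion layers:

    #include <math.h>

    /* Hypothetical placeholders, not a real API. */
    int  find_cube(double *ex, double *ey);    /* cube offset in camera coords */
    void move_by(int fd, double dx, double dy);
    void close_gripper(int fd);

    void approach_cube(int fd)
    {
        const double gain = 0.5;   /* cover half of the remaining error per step */
        double ex, ey;

        while (find_cube(&ex, &ey) && (fabs(ex) > 1.0 || fabs(ey) > 1.0))
            move_by(fd, gain * ex, gain * ey);

        close_gripper(fd);
    }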

This is my favorite project. Based on information from the camera, the robot tries to keep the recognized face in the centre of the frame. The face detection algorithm is taken from the OpenCV library.
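The detection itself is the stock Haar cascade detector from OpenCV; the rest is turning the first detected face into an offset from the image centre that a servoing loop can chase. Roughly, with the OpenCV 1.x C API of that era (error handling omitted):

    #include <cv.h>

    /* Offset of the first detected face from the image centre. */
    int face_offset(IplImage *img, CvHaarClassifierCascade *cascade,
                    CvMemStorage *storage, int *dx, int *dy)
    {
        cvClearMemStorage(storage);
        CvSeq *faces = cvHaarDetectObjects(img, cascade, storage,
                                           1.2, 3, CV_HAAR_DO_CANNY_PRUNING,
                                           cvSize(40, 40));
        if (!faces || faces->total == 0)
            return 0;

        CvRect *r = (CvRect *)cvGetSeqElem(faces, 0);
        *dx = (r->x + r->width / 2)  - img->width / 2;
        *dy = (r->y + r->height / 2) - img->height / 2;
        return 1;
    }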

After the experiment with the joystick I wondered if it was possible to create a better interface for controlling the robot. I discovered the haptic device, which is basically a 3D pointing device. Multimedia people use it to edit 3D graphic scenes. I thought it should be perfect for controlling the robot.

This video shows the results. Playing with the haptic device connected to the robot was a lot of fun.

Have you seen the Wiimote experiments by Johnny Chung Lee? He created a 2D pointing device based on the Wiimote's infrared camera and an infrared diode.

I thought it would be great to use two Wiimotes to get the 3D position of a diode and connect it to the robot. I wanted to achieve a similar experience to the haptic device, but with less expensive hardware.

On the right side you can see the two Wiimotes and two experimental infrared pens. Each pen has two infrared diodes, so we know not only its position in 3D but also its angle.

It's not so easy to reconstruct 3D information from two 2D views. I needed a lot of time to understand and code the mathematics behind it.
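In the simplest configuration - two identical cameras side by side, looking in the same direction - it boils down to classic stereo triangulation: depth from disparity. A simplified sketch (f is the focal length in pixels, B the distance between the cameras; a real setup needs calibration for arbitrary camera poses):

    typedef struct { double x, y, z; } Point3;

    /* Triangulate a diode seen at pixel (xl, yl) in the left camera
       and at column xr in the right one (rectified, parallel cameras). */
    Point3 triangulate(double xl, double yl, double xr,
                       double f, double B)
    {
        Point3 p;
        double d = xl - xr;     /* disparity in pixels */

        p.z = f * B / d;        /* depth */
        p.x = xl * p.z / f;
        p.y = yl * p.z / f;
        return p;
    }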


Here you can see how it worked: a cheap and quite accurate 3D pointing device.

Recently my friends Aleksander Górski and Łukasz Hrynakowski have been working on a project in which they put a SICK laser scanner on the robot. The scanner is basically a single infrared beam and a rotating mirror; it returns the distances to walls and other objects in a plane. The idea is to sweep the laser from top to bottom and create a semi-3D view of a target, like a person.
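Turning the scans into points is simple geometry: every sample is a (mirror angle, distance) pair, and the third coordinate comes from how far the constant-speed sweep has travelled. A sketch:

    #include <math.h>

    typedef struct { double x, y, z; } ScanPoint;

    /* theta is the mirror angle, range the measured distance,
       and z follows from the sweep speed and the elapsed time. */
    ScanPoint sample_to_point(double theta, double range,
                              double sweep_speed, double t)
    {
        ScanPoint p;
        p.x = range * cos(theta);
        p.y = range * sin(theta);
        p.z = sweep_speed * t;
        return p;
    }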

Some time ago Google Street View cars were spotted in Europe. It seems that Google uses similar SICK lasers.

This is how a scan looks. You have to hold still for about 30 seconds.

It took a few months of work to make the robot sweep from top to bottom in a straight line at constant speed. Robots aren't really made to move in straight lines.

And here you can see the results of the scan. It's technically called a 2.5D scan, because it's not full 3D, only 3D from one side.

I find the results very interesting and can't wait to see the final results of this project.

Finally, a scan of the faces of the two project authors, and mine.

That's it! I hope you learned something from this talk.




One more slide. This is my dream, but our robots are too weak for this kind of thing.

