Building a digital twin of the physical world – Highlights from the UnixWorld Challenge
Nokia Bell Labs just held a two-day conference at its Murray Hill, NJ location to celebrate the 50th anniversary of Unix’s birth in 1969. Rather than simply dwell on the past, the conference blended retrospective presentations by luminaries from that era with speculation from a variety of speakers about emerging technologies in systems software, networking and robotics.
In keeping with this forward-looking emphasis, parts of the conference program were student-centric, including a three-day robotics programming competition called the UnixWorld Challenge. A dozen graduate students in computer-related fields from schools across the country were invited to participate. Experience in robotics was not a prerequisite, but all contestants were already proficient in Python programming. On arrival, they were assigned to four color-coded teams of three. Each team was given a pair of technical 'coaches.' Other technical staff served as impartial 'referees' for the competitive trials, while still others maintained the necessary hardware, software and networks.
Murray Hill's robotics team had prepared the physical environment, a set of wheeled robots, and a networked software infrastructure for the students to work with. This was more than a 'toy' setup: it will continue to be used for robotics projects at Murray Hill.
The environment consisted of a series of small, interconnected 'rooms,' with both exterior and interior walls about waist-high. Each room was populated with objects typical of an industrial environment: a stack of boxes in a receiving area, a robotic arm in a manufacturing area, etc. In keeping with the theme of the conference, the interior walls spelled the four letters UNIX when viewed from above.
The wheeled robots were customized in-house from an off-the-shelf base model. In addition to drive mechanisms, they had wireless connectivity (WiFi, Bluetooth and LTE), LIDAR and cameras. All the control software running on board the robots was provided and fixed, based on ROS atop Ubuntu. The students programmed a client machine that accessed each robot over the network through a Python API.
The software framework provided more than primitive robot control: it combined physical geometry and LIDAR readings to yield location information; it supplied path planning to drive a robot to target coordinates; and it recognized objects from a captured image, based on prior machine-learning training. The implementation of these services was distributed across a local network.
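The article does not show the API itself, but a client-side surface for such a framework might look something like the following sketch. Every name and signature here (RobotClient, locate, drive_to, and so on) is a guess for illustration, not the actual Bell Labs interface.

```python
# Hypothetical sketch of a client-side surface for the kind of framework
# described above. All names and signatures are assumptions.
from dataclasses import dataclass
from typing import List

@dataclass
class Pose:
    x: float        # meters, in the map frame
    y: float
    theta: float    # heading, radians

@dataclass
class Detection:
    label: str          # e.g. "box" or "robotic_arm"
    confidence: float   # 0.0 to 1.0
    bearing: float      # object's angle off the camera axis, radians

class RobotClient:
    """Networked proxy for one robot plus the framework's services."""

    def __init__(self, host: str, port: int = 9090) -> None:
        self.host, self.port = host, port   # connect to the robot's bridge

    def locate(self) -> Pose:
        """Current pose, fused from floor-plan geometry and LIDAR."""
        raise NotImplementedError

    def drive_to(self, x: float, y: float) -> bool:
        """Plan a path and drive to (x, y); True if the robot arrived."""
        raise NotImplementedError

    def rotate_to(self, theta: float) -> None:
        """Turn in place until the camera faces heading theta."""
        raise NotImplementedError

    def capture_image(self) -> bytes:
        """Grab one frame from the onboard camera."""
        raise NotImplementedError

    def recognize(self, image: bytes) -> List[Detection]:
        """Run the pre-trained object-recognition service on a frame."""
        raise NotImplementedError
```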
The students' programming challenge was to write Python code against the provided API to drive the software framework. Their code needed to direct a robot to explore the rooms, locating and identifying objects. The crucial scenario was: move the robot to target coordinates, rotate the robot to aim the camera, capture an image, determine the presence and nature of an object, and, if one is present, infer its location. That inference depended on the object-recognition service identifying the detected object within the captured image.
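As a concrete illustration of that scenario, here is a minimal sketch built on the hypothetical RobotClient above. The fixed standoff range is an assumption, since the article does not describe how distance to a recognized object was estimated.

```python
import math
from typing import Optional, Tuple

def inspect_waypoint(robot: RobotClient, x: float, y: float,
                     heading: float) -> Optional[Tuple[str, float, float]]:
    """One pass of the scenario: drive, aim the camera, capture,
    recognize, and if something is seen, infer where it sits on the map."""
    if not robot.drive_to(x, y):
        return None                       # path planning failed; give up here
    robot.rotate_to(heading)              # point the camera
    detections = robot.recognize(robot.capture_image())
    if not detections:
        return None                       # nothing recognized at this spot
    best = max(detections, key=lambda d: d.confidence)
    pose = robot.locate()
    # Project the detection into map coordinates from the robot's pose and
    # the detection's bearing. STANDOFF is an assumed fixed range, since
    # the article doesn't say how depth was estimated.
    STANDOFF = 1.5  # meters
    angle = pose.theta + best.bearing
    return (best.label,
            pose.x + STANDOFF * math.cos(angle),
            pose.y + STANDOFF * math.sin(angle))
```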
Over the course of the competition, the challenge was presented as a sequence of incrementally more difficult tasks, each requiring more services from the API and more programming from the students. The first task was simply to get a robot to move; the final task was to traverse all four rooms under time constraints, identifying as many arbitrarily placed objects as possible. Against the given API, the final task typically required students to write about 1,000 lines of Python.
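A plausible top level for that final task, again using the hypothetical pieces sketched above, is a timed tour of waypoints; both the tour and the time budget here are illustrative guesses, not details from the competition.

```python
import time
from typing import List, Tuple

def final_task(robot: RobotClient,
               tour: List[Tuple[float, float, float]],
               budget_s: float = 600.0) -> List[Tuple[str, float, float]]:
    """Visit a precomputed tour of (x, y, heading) waypoints through the
    four rooms, recording every identified object until time runs out."""
    deadline = time.monotonic() + budget_s
    found = []
    for (x, y, heading) in tour:
        if time.monotonic() >= deadline:
            break                         # out of time; stop cleanly
        hit = inspect_waypoint(robot, x, y, heading)
        if hit is not None:
            found.append(hit)
    return found
```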
Most of the students' programming and debugging was performed against a simulator provided in the software framework, with periodic tests against a real robot. As might be expected, such simulation yields huge productivity gains over running every test in a physical setting. Those gains are inevitably tempered by the frustration of discovering a simulator's infidelities: there are always more things that go wrong in the real world.
Testing of the students' code against the final task was 'hands-off': a referee initiated their code while the students became passive observers of the results. Points were awarded competitively, weighted for both accuracy and speed. Penalties were assessed if a robot required manual assistance; two of the four teams managed to avoid any.
Following the final tests, the students were left in a combined state of exhilaration and exhaustion. All the teams got their robots to traverse the rooms successfully; what distinguished them was their performance on object detection. The winning team, Yellow, scored more points by emphasizing accuracy over speed: they captured and evaluated more images to improve the image-recognition yield, and the scoring protocol rewarded them for it. As is often the case, thoroughness beat out raw speed.
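One plausible way to spend time for accuracy, in the spirit of the Yellow team's approach, is to capture several frames of the same scene and take a majority vote over the recognized labels. This sketch reuses the hypothetical client above; the vote parameters are invented, as the article does not give the team's actual settings.

```python
from collections import Counter
from typing import Optional

def recognize_carefully(robot: RobotClient, tries: int = 5,
                        min_votes: int = 3) -> Optional[str]:
    """Trade time for accuracy: capture several frames and accept a
    label only if it wins a clear majority of the detections."""
    votes: Counter = Counter()
    for _ in range(tries):
        for det in robot.recognize(robot.capture_image()):
            votes[det.label] += 1
    if not votes:
        return None
    label, count = votes.most_common(1)[0]
    return label if count >= min_votes else None
```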
Written by Tom Cargill
About Tom Cargill
Tom Cargill is a recognized authority in the field of object-oriented programming. After receiving his Ph.D. in 1979, Tom remained at the University of Waterloo as an Assistant Professor of Computer Science until 1982, when he joined the Computing Science Research Center of AT&T Bell Laboratories in Murray Hill, NJ. His research there involved the investigation of debuggers, particularly their user interfaces, which motivated him to start using C++ when the language was still called C84. He had some influence on the language in its early years and has continued to use it, teach it, and follow its growth.
In the Fall of 1988, he moved to AT&T in Denver, Colorado, working mostly on debuggers written in C++. Since 1989 Tom has been an independent consultant based in Boulder, Colorado, providing consulting services for a wide variety of companies. http://www.profcon.com/