Researchers at Cornell University have developed a way for robots to consider the human element when arranging a room by teaching robots to “hallucinate” where and how humans might interact with the space—such as standing, sitting or working in a room—and place objects according to their usual relationship to those people in the robot’s imagination.
According to the University’s article, “previous work on robotic placement…has relied on modeling relationships between objects. A keyboard goes in front of a monitor, and a mouse goes next to the keyboard. But that doesn’t help if the robot puts the monitor, keyboard and mouse at the back of the desk, facing the wall.” With this approach, every object is instead described in terms of its relationship to a small set of human poses, rather than to a long list of other objects. “A computer learns these relationships by observing 3-D images of rooms with objects in them, in which it imagines human figures, placing them in practical relationships with objects and furniture. You don’t put a sitting person where there is no chair. You can put a sitting person on top of a bookcase, but there are no objects there for the person to use, so that’s ignored.”