Robots and Real-World Variability


How long did it take you to get to work today? Me? A 12-mile trip took about 45 minutes. My daughter’s bus was running a little late. My son forgot to pack his backpack last night – what a surprise. Construction workers had taken over my “secret” cut-through. And one of the elevators in my office building was out. Again.

Most days, barring a major accident, I get into the office about when I expect, plus or minus 10 minutes. Could I eliminate the variability in my commute time? Sure. Live at work. But most days, I find ways to navigate the obstacles in my path.

Like my daily commute, manufacturing is full of variation. Raw materials from suppliers may not be exactly the same from batch to batch. Multiple pieces of equipment making the same part are not identical. The performance characteristics of the parts they produce vary over time. In manufacturing parlance, these are often called common cause variations. Simply put, the real world is full of variability.

This ever-present variability challenges robots in manufacturing. For years, robots have required that everything around them be bolted down so that the environment in which they work is always controlled and exactly the same. Traditional robots can only pick a part from precisely the same place – every time. And put it back down, precisely in the same way, in the same place – every time. Introduce any variation in the placement of the part, or in the path along which the robot moves it, and the robot simply stops working.

This inability to deal with variability is, in large part, the reason why as much as 90 percent of manufacturing tasks have not been automated. I recently visited a contract manufacturer in Guadalajara, Mexico, where, except for the surface-mount technology machines, the equipment was on wheels. Water spiders – the material handlers who keep the lines stocked – were going up and down the lines, bringing parts and taking away assemblies. Production runs last only a few months, or even a few weeks. The variability in these environments severely limits the practical application of traditional robots.

And for large-scale implementations of traditional robotics solutions, the inflexibility has significant implications for the manufacturer’s ability to recover the investment. I’ve walked through plants where the plant manager can point out which product line a given piece of automation was built for – and when it last produced anything. One of the automation guys I know calls these “monuments” – they aren’t much more than historical infrastructure. He has a sign posted in his office: “No Monuments.”

No longer. A new breed of smart, collaborative robots is coming online that approaches variability in a different way. Rather than assuming a perfect world, which can come at the expense of flexibility and agility, these robots can accommodate the changes and normal fluctuations that exist in most modern manufacturing environments.

Advances in hardware and software are making it possible for robots to work seamlessly, cost-effectively and with little integration time in semi-structured environments. These robots understand the context of the task being performed and possess the cognitive and mechanical abilities to carry that task out. Like their human counterparts, collaborative robots are trained to do a task rather than programmed to move an object from point A to point B along path Y. When the environment inevitably changes, the focus remains on the task at hand and on getting the job done.

In these environments, there are two dimensions for which the robot must be optimized. The first is time. Robots must be able to synchronize motion and task with machines and people through signals or directly with sensors. It’s this ability that makes it possible for the robot to collaborate with people – who work at varying paces, who tackle things differently and who need a “colleague” able to accommodate the unpredictability and variability that people bring to the environment.

The second dimension is space. While parts are expected to be presented in an organized fashion, these robots are able to accommodate a few centimeters of variability in part placement and tolerate changes in general location.

Robots today are able to:

  • Use embedded vision to dynamically monitor workspaces designed for humans and adapt to changes in the work cell, such as a bumped table or a misaligned cart on wheels (see the sketch after this list)
  • Alter motion path planning in real time to accommodate unexpected obstacles
  • Use mechanical compliance to flex parts into position despite irregularities in pick and placement positions, without damaging the part, the fixture or the robot
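
The “bumped table” case in the first bullet comes down to a rigid-frame correction: once vision reports how far the work cell has shifted, every taught waypoint can be re-expressed in the cell’s new pose instead of being re-taught or re-fixtured. The sketch below is only an illustration of that idea – the function names and the 2-D translation-plus-rotation pose format are assumptions for this post, not any vendor’s API.

```python
# Minimal sketch: re-anchor taught waypoints to a work cell that vision has
# detected sliding and rotating slightly (illustrative only).
import numpy as np

def cell_transform(dx, dy, theta):
    """Homogeneous 2-D transform for a cell that slid by (dx, dy) and rotated by theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0,  0,  1]])

def adjust_waypoints(waypoints_xy, detected_pose):
    """Map waypoints taught in the original cell frame into the shifted frame."""
    T = cell_transform(*detected_pose)
    pts = np.c_[waypoints_xy, np.ones(len(waypoints_xy))]   # to homogeneous coordinates
    return (T @ pts.T).T[:, :2]

# Taught pick/place points (metres) and a 3 cm, 2-degree bump reported by vision.
taught = np.array([[0.40, 0.10], [0.40, 0.25], [0.55, 0.25]])
print(adjust_waypoints(taught, (0.03, 0.00, np.radians(2.0))))
```

In practice the correction is done in three dimensions with full poses, but the principle is the same: re-anchor the taught motions to the cell the camera actually sees.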

These advances mark the beginning of a new era, in which robots are no longer limited to assembling the same item over long runs, in volumes large enough to justify the high cost and semi-permanence of the infrastructure. These smarter and more capable robots are working in the real, imperfect and highly variable world – and changing manufacturers’ mindsets about where and how automation can deliver real value.

Share your perspectives with me @jim_lawton.

Originally published on Forbes.

Robots and the Human Touch

Have you thought much about what touch means to us as humans? Watch a pianist carefully and wonder at the range of sounds created by the seemingly simple touch of fingers on the keys. Observe a craftsman building a piece of furniture and count the times hands meet wood to measure smoothness, shape, and feel. Or see a watchmaker tightening gears and screws at the perfect level of tension to create a masterful timepiece.

Our ability to touch and feel the environment around us is an extraordinary aspect of what makes us human. It also allows us to perform a wide variety of delicate and intricate tasks without harming the people around us or the objects with which we interact.

It may seem odd to write about human touch in a piece on robots and automation, but until now the lack of this ability has sorely limited the use of robots in manufacturing environments. That’s all changing. While robots may never experience the emotions conjured in humans by the feel of a baby’s breath on the neck, they are more and more able to bring the incredible value of touch to the tasks they perform.

Consider what it takes to test a printed circuit board (PCB). A worker picks up an untested PCB, moves it through the air and carefully inserts it into a fixture that may have no more than 100 microns of clearance. Once the results are known, the PCB is passed on to the next step in the manufacturing process or is set aside to be reworked or scrapped.

Humans are not particularly precise creatures. And yet, without consciously thinking about what needs to happen, thousands of workers perform this task extraordinarily well every single day. How is this possible?

First, the worker can move the untested PCB through free space, stably and purposefully, into the test fixture. While high precision is not necessary for this step, collisions, erratic movement, and rapid acceleration must all be avoided.

Next, the worker feels the forces being applied by the fixture as the PCB is inserted. They dynamically adjust the stiffness of their arm to securely snap the PCB into place without damaging the PCB, the fixture or the tester. The direction and magnitude of the applied forces change constantly until the task is achieved.

Rigidity is not an option in this scenario, or in many others. The spring-like nature of our limbs allows us to use our arms to guide a dance partner, our legs to navigate uneven terrain, and our bodies to handle myriad other tasks.

Why is this easy for humans and so hard for robots? Consider the difference between a bouncing ball and a human jumping. The ball hits the ground, forces interact, power is transferred, and the ball bounces back up. Unlike the ball, humans jump through free space, absorb the initial impact of the ground through the springiness in their legs, and then gradually stiffen the leg muscles to stabilize position.

Until now, robots have not been able to master this give-and-take – the ability to apply just the right amount of pressure and to respond and react as needed:

  • Point-to-point, position-controlled robots depend on careful alignment of the object in play. This approach is fine until the object isn’t exactly aligned or something gets in the path between point A and point B. The robot will keep applying force until the object is aligned, removed or, more likely, damaged. Avoiding this requires sophisticated vision systems or complex, integration-heavy fixturing. These solutions are costly and very inflexible. The result? Robots are rarely used to perform tasks like PCB testing.
  • Alternatively, force-controlled robots interact with objects more gracefully and are better suited to tasks that require the finesse exemplified by inserting the PCB into the tester. Again, fine – until the robot has to move through free space. Then it becomes dangerous, moving faster and faster until something stops it.

Neither option offers that “spring-like” nuance so essential to not damaging limbs or objects.

Today, low-cost sensor technology and advances in robot design architecture make it possible to combine mechanical compliance (the ability to mimic the give-and-take of a human arm) with impedance control (dynamically controlling stiffness, or springiness, as described by the difference between a human jumper and a bouncing ball). As a result, smart, collaborative robots now bring the long-sought-after ability to “touch” and “feel” their way through tasks like humans do. Today, robots can load PCBs for testing.
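
To make the stiffness idea concrete, here is a minimal sketch of a textbook, one-axis impedance law – an illustration of the concept only, not any particular robot’s controller, and the gain values are invented. The commanded force comes from a virtual spring and damper between where the arm is and where it is asked to be; dropping the spring constant once contact is detected is what lets the part seat itself instead of the robot fighting the fixture.

```python
# Minimal sketch of a one-axis impedance (spring-damper) control law.
def impedance_force(x, v, x_desired, stiffness, damping, v_desired=0.0):
    """Commanded force from a virtual spring (stiffness) and damper (damping)."""
    return stiffness * (x_desired - x) + damping * (v_desired - v)

# Stiff while moving through free space, soft once contact is detected
# (illustrative gains: N/m and N*s/m).
FREE_SPACE_GAINS = dict(stiffness=2000.0, damping=90.0)
IN_CONTACT_GAINS = dict(stiffness=150.0, damping=25.0)

def commanded_force(x, v, x_desired, in_contact):
    gains = IN_CONTACT_GAINS if in_contact else FREE_SPACE_GAINS
    return impedance_force(x, v, x_desired, **gains)

# Example: 2 mm short of the goal, creeping forward, already touching the fixture.
print(commanded_force(x=0.098, v=0.01, x_desired=0.100, in_contact=True))
```

That trade-off is exactly what the PCB example describes: stiff enough to move purposefully through free space, soft enough to “give” once the board meets the fixture.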

It’s hard to imagine our lives without the ability to touch. As manufacturers look to build the factories of the future, where human brain power will be more essential in every corner of the operation, robots that can perform tasks requiring touch will be a critical asset.

Where do you see robots with human-like touch fitting into your operation? Share your perspectives with me @jim_lawton.

Originally published on Forbes.

Mind + Maker: Manufacturers Rethink Robots

Say ‘robot’ and images of Rosie, the Terminator or R2-D2 come to mind. Star Wars meets The Jetsons. Today’s reality, however, looks more like a fleet of industrially colored robots purchased by car makers to weld and assemble vehicles and move heavy metal.

But this reality is changing. Fast. Robots are smarter than ever before, safe enough to work among humans, and can perform physical tasks once considered impossible.

So, what does it mean to have robots among us? Advances in technology are paving the way for a new breed of robot that delivers entirely new ways of performing physical tasks. And when combined with the revolution in big data and advanced AI, these smart, collaborative robots are going to profoundly change how manufacturing gets done.

Breaking the Barriers to Widespread Automation of Physical Tasks

Today, more than 90% of physical tasks performed in manufacturing environments can’t be practically or economically automated. Why? Robots can’t adapt to the real-world variability in the workspace or operate effectively in semi-structured environments. Being agile enough to stop working on one task and shift to another quickly, without reprogramming, is beyond the capability of most existing automation. The list goes on. But advances in compliant motion control, integrated vision, extensible software platforms and advanced AI are tearing down these obstacles even as you read this post.

Sense, Adjust and Learn: Automating Cognitive Tasks

Much is being written about advances in AI. Some speculate on the arrival of malevolent robots that, through conscious volition, would harm the human race. In truth, the opportunities over the next three, five and ten years are mundane by comparison – and much more practical.

Robots are now able to apply basic common sense to reduce the cognitive load of the user. Consider a robot picking parts out of an egg-carton-style grid. If the robot learns that one of the parts has shifted and is now in a different location, it can infer that the other parts have shifted as well and can calculate the new locations without user intervention. Robots are now able to sense the world around them and adapt on the fly to the changes typical of semi-structured manufacturing environments.
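
The egg-carton inference is easy to picture as a rigid shift: if one slot turns up a centimeter from where it was taught, assume the whole tray moved and translate the other slots by the same offset. The sketch below is only an illustration of that reasoning – the function and the tray layout are hypothetical, not a description of any specific robot’s software.

```python
# Minimal sketch of the egg-carton inference: one slot is observed out of place,
# so assume the whole tray shifted rigidly and move every expected slot with it.
def infer_shifted_slots(expected_slots, observed_slot_index, observed_position):
    """expected_slots: list of (x, y) taught slot centres.
    observed_slot_index: index of the one slot the robot actually re-detected.
    observed_position: (x, y) where that slot was actually found."""
    ex, ey = expected_slots[observed_slot_index]
    ox, oy = observed_position
    dx, dy = ox - ex, oy - ey          # how far the tray appears to have moved
    return [(x + dx, y + dy) for (x, y) in expected_slots]

# A 2x3 tray taught at 5 cm pitch; slot 0 is re-detected 12 mm to the right.
taught = [(col * 0.05, row * 0.05) for row in range(2) for col in range(3)]
print(infer_shifted_slots(taught, observed_slot_index=0, observed_position=(0.012, 0.0)))
```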

The automation of cognition dramatically increases the scalability of learning across the enterprise – and, ultimately, across enterprises. Cloud robotics leverages cloud computing, storage and advanced analytics to coordinate the actions of large numbers of robots and to let one robot benefit from the experiences of others. These advances replace the old model in which robots are manually re-programmed one at a time, increasing cost and risk and delaying the benefits of new knowledge.
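
At its simplest, the sharing pattern is publish-and-fetch: one robot uploads a parameter it has refined, and its peers pull that parameter instead of waiting to be re-taught. The toy sketch below shows only the shape of the idea – the store, the task name and the parameters are invented for illustration, not any vendor’s cloud API.

```python
# Toy sketch of the cloud-sharing pattern: publish a refined task parameter,
# and let peer robots fetch it instead of being re-programmed one at a time.
shared_store = {}   # stand-in for a cloud-hosted key-value store

def publish_learning(robot_id, task, parameters):
    """A robot that improved a task uploads its refined parameters."""
    shared_store[task] = {"learned_by": robot_id, "parameters": parameters}

def fetch_learning(task, local_defaults):
    """Peers use the shared parameters if any robot has published them."""
    entry = shared_store.get(task)
    return entry["parameters"] if entry else local_defaults

publish_learning("robot_07", "pcb_insertion",
                 {"approach_speed": 0.05, "contact_stiffness": 150.0})
print(fetch_learning("pcb_insertion",
                     local_defaults={"approach_speed": 0.10, "contact_stiffness": 400.0}))
```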

The bottom line? Robots in the near future will gain knowledge from experience, learn from each other and leverage cognitive computation to make themselves, their processes and the products they produce better.

What does all this mean for manufacturing? Manufacturers recognize that they need to be more responsive to market changes, ready to deliver on customer preferences, and able to innovate faster and more efficiently. All of which adds up to overwhelming demand for manufacturing environments agile enough to meet those needs.

Smart, collaborative robots will engender this agility. Factories will be smaller and located closer to markets and design centers, accelerating new product introduction and competitive advantage. Smaller production lots and mass customization will become economically viable, increasing customer loyalty and reducing risk. Companies will be able to retool their manufacturing systems to provide new roles for these mechanical “workers” as well as new roles for human workers.

The result? Yes, productivity and efficiency improve. Better yet, manufacturers find new ways to ignite creativity and fuel innovation.

I’d welcome your thoughts on what this new frontier in automation will mean to manufacturing and what it will take to seize the opportunity. You can reach me at @jim_lawton.

Originally published on Forbes.