More new applications of humanoid robots

“Honda has marched to the forefront of robotics with the world’s first
two-legged robot capable of dynamic walking. Honda is designing ASIMO [Advanced
Step in Innovative MObility] to play various challenging roles
in our society. Extraordinary hurdles in artificial intelligence have already been
overcome to expand ASIMO’s capabilities. Now ASIMO is able to recognize human
gestures and postures, as well as ambient surroundings, and even has the ability
to take independent action based on self-acquired information,” according to
a statement made by Honda.

The Honda presentation goes on to say that, “ASIMO sees you and remembers your
face. It maneuvers adroitly around people and objects. The robot also hears you
and responds obediently; and when ASIMO talks, you’re hearing a friendly voice
that has captured hearts around the world.”

People keep finding new uses for robots. Robots solder tiny wires to semiconductor
chips, and “pick and place” robots insert integrated circuits onto the printed
circuit boards used in all kinds of electronics. Robots also make and package
drugs, textiles, and foods. Certain dangerous jobs are best done by robots: guided
remotely using video cameras, mini-androids investigate and defuse bombs.

In fact, Mini-Andros robots are used by bomb squads across the country
to locate and dispose of bombs. About three feet long, the Mini-Andros look
something like small armored tanks with eight wheels on four “legs” that
extend for climbing stairs. Their movable arms can lift objects weighing up to 15
pounds and place them in bomb-proof boxes. Detachable accessories let the Mini-Andros
break windows, see in the dark, and defuse or detonate bombs directly, either by
blasting them with water, firing at them with a shotgun, or placing other smaller
bombs nearby.

Robots can also go into dangerously polluted environments, like chemical spills
or radioactive “hot zones” in nuclear power plants. Some spider-like robots
are designed to explore areas with extreme radiation that would kill a human. The
need for such a robot was made clear during the Chernobyl nuclear reactor accident
in 1986. An explosion and fire released dangerous radioactive material into the
air, making rescue and containment work nearly impossible.

Some robots “see” using ultrasonic sound, much the same way bats do. These
robots typically emit 40 kilohertz sound waves (too high for humans to hear), then
detect the echoes. Measuring the delay between when a pulse goes out and when its
echo returns gives an accurate measure of the distance to an object, and this
navigation technique works even in the dark. Touch sensors also help otherwise
blind robots navigate. Feelers, contact switches and bump sensors let a robot know
when it has made contact with walls or objects. Piezoelectric material is often
used in touch sensors because such crystals respond to pressure with a small electric
voltage. They can detect vibration, impact, and even heat.

Position sensors make it possible to teach a robot to do something (like spray-paint
a car) by leading it through the motions. Sensors on the robot’s joints save
information about the changing series of positions. The robot “remembers”
this information and repeats the motions exactly. Sensors for radio signals and
electric and magnetic fields are especially useful in robotics. Radio signals let
robots communicate with each other at a distance. Robotic lawnmowers use electromagnetic
sensors to stay within the bounds of the yard.
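
Record-and-replay teaching of the kind described above can be sketched in a few lines of Python; `read_joint_angles` and `move_to` are hypothetical stand-ins for a real robot’s joint-sensor and motor APIs:

```python
import time

def record_motion(read_joint_angles, duration_s=5.0, rate_hz=50):
    """Sample the robot's joint angles while a human leads it through a task."""
    samples = []
    for _ in range(int(duration_s * rate_hz)):
        samples.append(read_joint_angles())  # e.g. a tuple of angles in degrees
        time.sleep(1.0 / rate_hz)
    return samples

def replay_motion(samples, move_to, rate_hz=50):
    """Drive the joints back through the recorded positions at the same rate."""
    for pose in samples:
        move_to(pose)
        time.sleep(1.0 / rate_hz)
```

The key point is that the robot stores a sequence of positions, not a hand-written program: the human demonstration *is* the programming.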

Smell and taste in robots are not yet as refined as those of humans, nor do they
need to be. Robotic sensors can detect specific gases, including gases that humans
cannot smell. One of the most important uses of smelling robots is in airports,
where they detect fumes from explosives hidden in luggage and shoes.

Can robots talk?

Speech recognition systems have come a long way in the last decade. There are already
systems that let you “type” into a computer with your voice, and some telephone
menus use speech recognition to let you make your selections verbally.

Processing language is complicated, however, thanks to the different accents and
cadences people speak with, and the fact that many words and word fragments sound
alike. Think of words such as “there,” “their,” and “they’re,” or how an
accent can change the pronunciation of a word entirely. That’s
why—at least for now—speech recognition systems work best when the vocabulary
is limited to a few set commands.
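
This is why a small fixed command set is tractable: the robot only has to decide which of a handful of known words an utterance is closest to. A toy sketch using Python’s standard-library string matching (the command list is invented for illustration):

```python
import difflib

COMMANDS = ["stop", "forward", "back", "left", "right"]  # hypothetical command set

def match_command(heard: str):
    """Map a (possibly garbled) transcription to the closest known command."""
    matches = difflib.get_close_matches(heard.lower(), COMMANDS, n=1, cutoff=0.6)
    return matches[0] if matches else None

print(match_command("forwrd"))   # forward
print(match_command("stob"))     # stop
print(match_command("banana"))   # None
```

With the similarity cutoff, wildly different words fall through to `None`, which is how a system can reject out-of-vocabulary input instead of guessing.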

Having a robot “talk back” is much simpler; voice synthesizers that convert
text into speech only need a programmed list of pronunciation rules to speak intelligibly.
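
A rule table really is the heart of a simple synthesizer. The sketch below maps letter patterns to phoneme codes using longest-match rules; the rules and symbols here are illustrative, not a real phoneme inventory:

```python
# Letter-to-phoneme rules, multi-letter patterns first so they match before
# single letters. A real synthesizer has hundreds of context-sensitive rules.
RULES = [
    ("th", "TH"), ("sh", "SH"), ("ch", "CH"), ("oo", "UW"),
    ("a", "AE"), ("e", "EH"), ("i", "IH"), ("o", "AA"), ("u", "AH"),
    ("b", "B"), ("k", "K"), ("l", "L"), ("m", "M"), ("n", "N"),
    ("r", "R"), ("s", "S"), ("t", "T"),
]

def to_phonemes(word: str):
    """Scan left to right, always taking the first (longest) matching rule."""
    out, i = [], 0
    word = word.lower()
    while i < len(word):
        for pattern, phoneme in RULES:
            if word.startswith(pattern, i):
                out.append(phoneme)
                i += len(pattern)
                break
        else:
            i += 1  # no rule for this letter: skip it silently
    return out

print(to_phonemes("robot"))  # ['R', 'AA', 'B', 'AA', 'T']
```

A production system layers an exceptions dictionary on top of rules like these, because English spelling breaks any small rule set quickly.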

The road to universal robots

Research scientist Hans Moravec sees a four-stage evolution towards universal robots—robots
with human-level intelligence flexible enough to do a broad range of tasks. Key
to this evolution is a steady increase in computer power, defined in terms of millions
of instructions per second, or MIPS.

Moravec describes computer intelligence in terms of animal intelligence. For example,
a typical home computer has 1000 MIPS of power, about the brain power equivalent
of an insect. Among Moravec’s predictions, outlined below, is that robots will
achieve human-level intelligence (100,000,000 MIPS) in 2040.

Year: 2010

Processing power: 3,000 MIPS

Intelligence equivalent: lizard

Robots will have basic navigation skills and could be used for cleaning or delivery
and take on expanded roles in factories.

Year: 2020

Processing power: 100,000 MIPS

Intelligence equivalent: mouse

Robots will be able to learn on the job, adapting their own programs to perform
more successfully. Robots will do the same jobs as before, but more reliably and
flexibly.

Year: 2030

Processing power: 3,000,000 MIPS

Intelligence equivalent: monkey

Robots will demonstrate world modeling: a general understanding of objects and what
they are for, and of living things and how to interact with them. For example, a
robot will “know” what an egg is and know that it must be picked up gently.
Simulations will allow robots to practice and perfect new tasks before attempting
them. Robot servants will be able to “read” the moods of the people around
them.

Year: 2040

Processing power: 100,000,000 MIPS

Intelligence equivalent: human

Robots will be able to speak and understand speech, think creatively, and anticipate
the results of their actions far in advance. With reasoning power at or beyond the
human level, robots will be generally as competent as people.
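
The decade-to-decade jumps in Moravec’s figures imply roughly a 30-fold increase in MIPS every ten years, which works out to processing power doubling about every two years. A quick check of that arithmetic:

```python
import math

# Moravec's milestones: (year, MIPS)
milestones = [(2010, 3_000), (2020, 100_000), (2030, 3_000_000), (2040, 100_000_000)]

for (y0, m0), (y1, m1) in zip(milestones, milestones[1:]):
    factor = m1 / m0
    doubling_years = (y1 - y0) / math.log2(factor)
    print(f"{y0}->{y1}: x{factor:.0f} per decade, doubling every {doubling_years:.1f} years")
```

A two-year doubling time is in line with the historical growth of computing power that Moravec’s forecast extrapolates.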

“Robomenagerie,” vanguard of biomimetics, a strange field in which
scientists reverse-engineer nature’s greatest accomplishments

The idea is to copy Mother Nature’s tricks—things like a lobster’s
ability to navigate pounding surf or a bat’s sonar that lets it find mosquitoes
in the dark. Engineers and researchers are also moving toward microrobotics:
using many little robots to do the work of one big robot, or even of a human.
Military agencies like the idea of sending robobeasts to do things far too
dangerous for humans, such as clearing land mines or inspecting nuclear reactors
in submarines.

The best example of the microrobotic trend is seen in NASA, which has embraced the
“smaller, faster, cheaper” philosophy of sending lots of little space
probes to do the work of one big space probe.

Some examples of microrobotic creatures are the following: robofly, robolobster
(being built at Northeastern University), and robopike (which swims in a tank at
the Massachusetts Institute of Technology). Robofly is envisioned as having a body
made of paper-thin stainless steel and wings of Mylar, which looks and feels a lot
like Saran Wrap. Robofly is to be powered by the sun, with a tiny device called a
piezoelectric actuator flapping its four puny wings. One of robofly’s missions
would be to fly through the rubble of an earthquake searching for survivors, or
even to serve as a spy, keeping tabs on terrorists or wandering spouses.

“Robotics,” another word that enriches our language

The term “robotics” refers to the study and use of robots. The term was
coined and first used by the Russian-born American scientist and writer Isaac Asimov
(born January 2, 1920, died April 6, 1992). Asimov wrote prodigiously on a wide
variety of subjects but was best known for his many works of science fiction. The
most famous include I, Robot (1950); The Foundation Trilogy (1951–53);
Foundation’s Edge (1982); and The Gods Themselves (1972).

Asimov first used the word robotics in Runaround, a short story published
in 1942. I, Robot, a collection of several of these stories, was published
in 1950. Asimov also proposed his three “Laws of Robotics,” to which he later
added a “zeroth law.”

Law Zero: A robot may not injure humanity, or, through inaction, allow humanity
to come to harm.

Law One: A robot may not injure a human being, or, through inaction, allow a human
being to come to harm, unless this would violate a higher order law.

Law Two: A robot must obey orders given it by human beings, except where such orders
would conflict with a higher order law.

Law Three: A robot must protect its own existence as long as such protection does
not conflict with a higher order law.

Robotics is now defined as a branch of engineering that involves the conception,
design, manufacture, and operation of robots. This field overlaps with electronics,
computer science, artificial intelligence, mechatronics, nanotechnology, and bioengineering.
