
Sensor configuration. Sensing equipment on board includes the usual navigational receiver, which ties into the high-accuracy transponder network; a two-axis level sensor, so the robot knows its tipping angle with respect to the local gravity field; and a detachable grading sensor, which rolls along the ground just ahead of the precision grading blade and provides immediate real-time feedback to permit exact control of grading angle, pitch, and slew.
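The grading-sensor feedback loop described above can be sketched as a simple proportional controller: the sensor reports the height error of the ground just ahead of the blade, and the controller trims the blade pitch toward the target grade. All names, gains, and limits here are illustrative assumptions, not values from the study.

```python
def blade_pitch_update(current_pitch_deg, height_error_m, gain=5.0,
                       pitch_limit_deg=15.0):
    """Return a new blade pitch command from one grading-sensor reading.

    height_error_m > 0 means the ground ahead sits above the target grade,
    so the blade pitches down (negative correction) to cut deeper.
    """
    correction = -gain * height_error_m
    new_pitch = current_pitch_deg + correction
    # Clamp the command to the blade's mechanical travel.
    return max(-pitch_limit_deg, min(pitch_limit_deg, new_pitch))

# Driving the controller over a short run of sensor readings:
pitch = 0.0
for error in [0.10, 0.06, 0.02, -0.01]:
    pitch = blade_pitch_update(pitch, error)
```

A real grader controller would also filter the sensor signal and account for vehicle speed, but the closed-loop structure is the same.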

The most complex sensor system is the remote camera arm. (See the discussion of state-of-the-art techniques by Agin, 1979.) The camera is binocular, both to allow ranging and depth perception and to provide a spare in case one camera "eye" fails. It is mounted on a long robot arm which can be directed to observe any part of the vehicle or to survey the landscape during roving activity. The camera arm will need at least seven degrees of freedom: rotation of the arm shaft, flexure of the two intermediate joints, bending at the wrist, camera rotation, lens rotation for focus, and telephoto capability for close scrutiny of interesting features in the environment.
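Keeping track of where the camera is as the joints move is a forward-kinematics problem. The sketch below models only the joints that actually displace the camera, collapsed to a planar serial chain for brevity (camera rotation, focus, and telephoto do not move the lens through space and are omitted). Link lengths and angles are illustrative assumptions.

```python
import math

def camera_position(joint_angles_deg, link_lengths_m):
    """Forward kinematics of a planar serial chain.

    joint_angles_deg[i] is joint i's angle relative to the previous link;
    link_lengths_m[i] is the length of the link after joint i.
    Returns the (x, y) position of the camera at the chain's tip.
    """
    x = y = 0.0
    heading = 0.0  # accumulated orientation of the current link
    for angle, length in zip(joint_angles_deg, link_lengths_m):
        heading += math.radians(angle)
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y
```

The onboard computer would run the full 3-D version of this chain every control cycle so the camera's position is always known.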

The mining robot camera arm is absolutely essential if the vehicle is to function in the versatile manner envisioned for it. It is not enough simply to know position in space, because the environment in which the system must operate is highly complex. It might be possible for the seed computer to give the robot a "road map" to 1 m accuracy, but this would not allow for proper navigation once the miners begin to physically alter their surroundings by digging, hauling, dozing, etc. Also, there may be features smaller than 1 m, such as crevasses and boulders, that could cause major difficulties. Hence, it seems necessary to give the mining robots a true generalized "intelligent" roving capability.

Automation and AI requirements. The camera arm will require some high-level AI beyond the current state of the art. The onboard computer must keep track of the position of the moving arm in order to know where the camera is at all times. There must be routines for avoiding obstacles - for instance, the system should avoid hitting the camera with the loading bucket. Complex pattern recognition routines must be available to permit image focusing, telephoto operation, interpretation of shadows and shapes, differentiation between protrusions and depressions in the surface, and intelligent evaluation of the potential risks and hazards of various pathways and courses of action. The onboard computer must have an accurate representation of its own structure stored in memory, so that the camera may quickly be directed to any desired location to inspect for damage, perform troubleshooting functions, or monitor tasks in progress. Finally, the computer must have diagnostic routines for the camera system, in case something simple goes wrong that can easily be corrected in situ without calling for outside assistance.
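The self-model-based obstacle avoidance described above can be illustrated with coarse bounding spheres: the computer keeps a sphere around each of its own parts (loading bucket, chassis, and so on) and rejects any camera waypoint that comes too close to one. The part geometry and clearance values below are invented for illustration, not the study's design.

```python
import math

SELF_MODEL = {
    # part name: ((center x, y, z in meters), bounding radius in meters)
    "loading_bucket": ((2.0, 0.0, 0.5), 1.0),
    "chassis":        ((0.0, 0.0, 0.8), 1.5),
}

def camera_path_clear(waypoint, clearance=0.2):
    """Return True if `waypoint` keeps `clearance` from every modeled part."""
    for part, (center, radius) in SELF_MODEL.items():
        if math.dist(waypoint, center) < radius + clearance:
            return False  # camera would pass too close to this part
    return True
```

A production system would check the whole swept path of the arm, not just waypoints, but the same self-model supports both tests.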

According to Carrier (1979) the haulers can easily be designed to operate in an automatic mode, requiring only occasional reprogramming, though substantially more advanced AI pattern recognition systems will be needed. (In 1980 a child's toy was marketed which can be programmed to follow simple paths (Ciarcia, 1981; "Toy Robots," 1980).) Carrier suggests that since there are so many variables associated with excavation "it is doubtful that the front-end loader could operate automatically," though the team disputes this conclusion. In addition to sophisticated pattern recognition and vision systems (Williams et al., 1979), the robot miners need a "bulldozer operator" expert system of the kind under development at SRI for other applications (Hart, 1975, and personal communication, 1980). Such an expert system would embody the knowledge and skills of a human excavator and could substitute for human control in most circumstances. In addition, expert systems might be executed remotely by a process called "autonomous teleoperation." In this mode of operation, mining robots can be remote-controlled via transponder network links by the master LMF computer, thus reducing onboard computer complexity.
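A rule-based skeleton conveys the flavor of such a "bulldozer operator" expert system: condition-action rules are examined in order, and the first condition that matches the current machine state decides the next action. The rules and state fields below are invented for illustration; an actual system of the SRI kind would encode far richer excavation knowledge.

```python
# Ordered condition-action rules; earlier rules take priority.
RULES = [
    (lambda s: s["obstacle_ahead"],       "halt_and_survey"),
    (lambda s: s["blade_load"] > 0.9,     "raise_blade"),
    (lambda s: s["grade_error_m"] > 0.05, "lower_blade"),
    (lambda s: True,                      "continue_pass"),  # default action
]

def next_action(state):
    """Return the action of the first rule whose condition matches `state`."""
    for condition, action in RULES:
        if condition(state):
            return action
```

For example, an obstacle ahead overrides everything else, just as a human operator would stop before worrying about blade load.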

Additionally, the onboard computer must handle such comparatively mundane chores as clocking, wheel drive-train operation, steering control, blade angle and configuration control, task completion testing and verification, guidance and navigation, and internal diagnostics. An executive program is also required, capable of accepting new orders from the central LMF computer (e.g., "rescue machine X at position Y") and semiautonomously calculating how best to execute them (Sacerdoti, 1980).
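The executive layer can be sketched as a dispatch table: each order type from the central LMF computer maps to a handler that works out the details semiautonomously. The order names, arguments, and handler bodies below are illustrative assumptions, not part of the study's specification.

```python
def rescue(machine, position):
    # Plan a semiautonomous rescue: route to the site, then tow the machine.
    return f"plan route to {position}, tow {machine} to repair bay"

def excavate(site, depth_m):
    return f"grade and excavate {site} to {depth_m} m"

HANDLERS = {"rescue": rescue, "excavate": excavate}

def execute_order(order):
    """Dispatch one order dict of the form {'cmd': ..., 'args': {...}}."""
    handler = HANDLERS.get(order["cmd"])
    if handler is None:
        return "reject: unknown order"  # refer back to the LMF computer
    return handler(**order["args"])
```

The reject branch matters: an executive that silently drops an unrecognized order would leave the central computer with a stale picture of the fleet.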

Computation and information requirements. A first-cut estimate of the computational capacity required on board reveals that three major computer subsystems are involved: (1) robot camera arm (seven degrees of freedom, binocular vision, rangefinding, sophisticated AI such as pattern recognition and inference); (2) excavator expert system (controls physical operations, understands a world model, has expectations about outcomes, and can troubleshoot simple problems); and (3) high-level executive system (reprogrammability, interpretation, and "common sense" reasoning). Each of these subsystems represents a different problem and must be separately analyzed.

The robot system with mobile camera studied by Agin (1979) performed only very primitive pattern recognition. Demonstrated tasks included insertion of bolts into holes, positioning a movable table relative to a fixed camera, velocity tracking (a Unimate PUMA arm, camera in hand, follows an object moving past on a conveyor belt), spot welding on a moving assembly line, and following a curved path in three dimensions at constant velocity (simulating industrial activities such as gluing, sealing, and seam-following). Agin's visual recognition routines ran on a PDP-11/40 minicomputer as a 28K application, and the PUMA robot arm was controlled by the usual LSI-11 microcomputer, which has a 16K capacity using 32-bit words. The visual system for the proposed