This thesis argues that the goal of building an intelligent robot is to construct an intelligent agent with a high degree of autonomy, situatedness, and learning ability; these three properties are essential measures of robot intelligence. When constructing an intelligent robot, traditional Artificial Intelligence (AI) builds the world model manually and represents the world with symbols, so that the robot can achieve its goals by planning and reasoning over those symbols. However, such a robot can hardly work well in a real, dynamic environment, because building the world model is complex and time-consuming. Brooks, as a representative of the new generation of AI researchers, believes that intelligence can emerge from the interaction between the robot and its environment, and considers it unnecessary to establish a symbolic model of the world: the robot simply reacts to the situation, with no explicit world model or reasoning procedure, and can still complete its task. Brooks's machines navigate quickly in dynamic environments, although they cannot achieve complex goals. In contrast to both positions, this thesis argues that the relevant world is the one sensed by the robot itself. The robot can acquire knowledge of the world through its sensors and actuators and represent that knowledge with symbols. Unlike traditional AI, this symbolic world is built not by people but by the robot itself. The robot can then use these symbols, called the robot language, to plan, reason, and achieve high-level intelligence. The thesis describes the steps of building such an intelligent robot: first, the robot classifies its sensor signals; then it learns the proper reactions in particular situations and the logical relations of the robot language.
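The first step above, classifying sensor signals into symbols of the robot language, can be sketched minimally as a nearest-centroid classifier. The labels, centroid values, and the single-range-sensor setup below are all illustrative assumptions, not the thesis's actual implementation.

```python
# Hypothetical sketch: map raw range-sensor readings to discrete symbols
# ("robot language") by nearest centroid. Centroid values are invented.
CENTROIDS = {
    "near_obstacle": 0.2,   # short range reading (metres)
    "open_space":    2.0,   # long range reading (metres)
}

def classify(reading):
    """Return the symbolic label whose centroid is closest to the reading."""
    return min(CENTROIDS, key=lambda label: abs(reading - CENTROIDS[label]))

print(classify(0.3))   # -> near_obstacle
print(classify(1.7))   # -> open_space
```

In a fuller system the centroids themselves would be learned from sensor data (e.g. by clustering), so that the symbols are built by the robot rather than by people, as the thesis proposes.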
Because learning improves a robot's autonomy and situatedness, the thesis introduces two learning strategies into robot learning and tests the algorithms on a simulated robot. Using reinforcement learning, the simulated robot learns the correct responses in specific situations: avoiding obstacles and moving toward the goal. Moreover, the robot quickly masters new actions when the parameters of its actuators change. The robot adopts associative learning to acquire simple logical relations in the robot language and to form a beneficial habit. The simulation experiment uses the interaction between the agent and the application to simulate the interaction between the robot and its environment. The analysis and the experimental results validate the learning strategies: after learning, the robot gains greater autonomy and situatedness.
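The reinforcement-learning strategy can be illustrated with a minimal tabular Q-learning sketch. The one-dimensional corridor, the reward values, and all hyperparameters below are assumptions made for illustration; the thesis's actual simulated robot and tasks (obstacle avoidance, goal seeking) are richer than this.

```python
import random

# Minimal tabular Q-learning on a 1-D corridor: the agent starts at cell 0
# and must reach the goal at cell 4. Moving off either end keeps it in place.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]          # move left / move right

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != GOAL:
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)
            r = 1.0 if s2 == GOAL else -0.01     # small step cost, goal bonus
            best_next = 0.0 if s2 == GOAL else max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = train()
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)]
print(policy)   # the learned greedy policy moves right toward the goal
```

The same update rule adapts when the environment changes, which parallels the observation above that the robot can relearn its actions when its actuator parameters vary.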