All that stuff would require functionality from the engine that doesn't exist in many games; you could simulate it, but that requires even more computation. A camera would be impossible because the number of pixels you'd need to examine is too much for real-time software, that really needs hardware. Distance sensors are pretty limited but can be done easily with tracelines (see PMB's first RACC bot), though these also chomp up some CPU.
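For what it's worth, a traceline "distance sensor" only takes a few lines with the stock SDK helpers. This is just a rough sketch; the SENSOR_MAX_RANGE value and the SensorDistance name are mine, not anything from the SDK:

```cpp
// Rough sketch of a traceline "distance sensor" for a Half-Life bot.
// Assumes the usual HL SDK headers are on the include path.
#include "extdll.h"
#include "util.h"

#define SENSOR_MAX_RANGE 2000.0f  // arbitrary maximum range, in world units

// Distance to the nearest obstacle in the given direction, or
// SENSOR_MAX_RANGE if nothing is hit within range.
float SensorDistance(edict_t *pBot, const Vector &vecDir)
{
   TraceResult tr;
   Vector vecStart = pBot->v.origin + pBot->v.view_ofs;             // bot's eyes
   Vector vecEnd = vecStart + vecDir.Normalize() * SENSOR_MAX_RANGE;

   // dont_ignore_monsters: players and monsters block the "sensor" too
   UTIL_TraceLine(vecStart, vecEnd, dont_ignore_monsters, pBot, &tr);

   // flFraction is how far the trace got before hitting something (0..1)
   return tr.flFraction * SENSOR_MAX_RANGE;
}
```

Each call is one traceline, so a bot with, say, eight of these sensors fired every frame means eight tracelines per frame, which is exactly where the CPU cost comes from.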
If you want to mess around with that kind of stuff, try
www.cyberbotics.com (which I believe you've seen) and get a Webots trial to mess around with real bots and simulate them using physics. You can get cameras to play with, but they are very small, about 100 by 100 pixels at most (any bigger and it gets too slow), and the trial limits practically everything. You can use GPS to get locations and "cheat" at finding objects and your current position, but using GPS is not very practical in a real-life situation.
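Something like this is all a basic Webots controller needs in order to read the camera and the GPS. Sketch only, written against the current Webots C API (names may differ between versions); the device names "camera" and "gps" and the 64 ms time step depend on the robot model and are assumptions:

```cpp
// Minimal Webots controller sketch: grab one camera pixel and the GPS
// position every step. Device names depend on the robot model.
#include <webots/robot.h>
#include <webots/camera.h>
#include <webots/gps.h>
#include <stdio.h>

#define TIME_STEP 64  // controller step in milliseconds (assumption)

int main()
{
   wb_robot_init();

   WbDeviceTag camera = wb_robot_get_device("camera");
   WbDeviceTag gps = wb_robot_get_device("gps");
   wb_camera_enable(camera, TIME_STEP);
   wb_gps_enable(gps, TIME_STEP);

   while (wb_robot_step(TIME_STEP) != -1) {
      // the image is tiny (e.g. 100x100) -- sample the centre pixel
      const unsigned char *image = wb_camera_get_image(camera);
      int w = wb_camera_get_width(camera);
      int h = wb_camera_get_height(camera);
      int red = wb_camera_image_get_red(image, w, w / 2, h / 2);

      // the "cheat": GPS hands you the robot's position directly
      const double *pos = wb_gps_get_values(gps);
      printf("centre red=%d  pos=(%.2f %.2f %.2f)\n", red, pos[0], pos[1], pos[2]);
   }

   wb_robot_cleanup();
   return 0;
}
```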
The bots in computer games (I'm talking about Half-Life) have this stuff available already: all the entities, nav-mesh data and so on. Visual information is the way to go for real AI, but it is implausible in a game such as Half-Life; you might get one bot at most, with very limited visual capability.
It's easier to examine the visible objects by using engine functions (in Half-Life, something like the PVS) plus a field-of-view check to filter the visible objects out of all of them, and then use some technique to decide which ones are interesting. But there's the problem of how to simulate light data. A camera would be ideal: if you could translate world positions to screen positions and examine the camera image at those positions to find objects of interest, it would be a way of incorporating some real input, though it would still need some artificial input to work.
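In SDK terms that filtering stage could look something like the sketch below: a radius search for candidates, a dot-product field-of-view check, then a traceline for line of sight. BotCanSee, BotScanVisible and the 1000-unit radius are made-up names/numbers, and a real bot would use the PVS to cut the candidate list down first rather than a plain radius search:

```cpp
// Sketch: filter nearby entities down to the ones the bot can actually see,
// using a field-of-view test plus a traceline for line of sight.
#include "extdll.h"
#include "util.h"
#include "cbase.h"

bool BotCanSee(edict_t *pBot, CBaseEntity *pOther)
{
   Vector vecEyes = pBot->v.origin + pBot->v.view_ofs;
   Vector vecDir = (pOther->pev->origin - vecEyes).Normalize();

   // field of view: compare the direction to the target with the bot's view direction
   UTIL_MakeVectors(pBot->v.v_angle);
   if (DotProduct(vecDir, gpGlobals->v_forward) < 0.5f)   // cos(60) -> ~120 degree cone
      return false;

   // line of sight: trace from the eyes to the entity's origin
   TraceResult tr;
   UTIL_TraceLine(vecEyes, pOther->pev->origin, ignore_monsters, pBot, &tr);
   return (tr.flFraction == 1.0f || tr.pHit == pOther->edict());
}

void BotScanVisible(edict_t *pBot)
{
   CBaseEntity *pEntity = NULL;
   while ((pEntity = UTIL_FindEntityInSphere(pEntity, pBot->v.origin, 1000)) != NULL)
   {
      if (BotCanSee(pBot, pEntity))
      {
         // entity is in view -- decide here whether it is "interesting"
      }
   }
}
```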
I think you can translate world coordinates to screen coordinates in the Half-Life client, but I don't know how to get colour information from a position on the screen!
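If it helps, in the client DLL the TriAPI has a WorldToScreen call, and in OpenGL mode you could read the colour back with glReadPixels. This is only a sketch: the scaling to pixels, the names WorldToPixel/ReadScreenColour and passing the screen size in as parameters are my own choices, and reading pixels back from the framebuffer every frame is slow:

```cpp
// Client-DLL sketch: project a world position to pixel coordinates with the
// TriAPI, then read that pixel's colour from the framebuffer (OpenGL mode only).
#include <windows.h>     // needed before gl.h on Windows
#include <GL/gl.h>
#include "hud.h"
#include "cl_util.h"
#include "triangleapi.h"

// Fills x/y (in pixels) and returns true if the point is in front of the view.
bool WorldToPixel(float *worldOrigin, int screenWidth, int screenHeight, int &x, int &y)
{
   float screen[3];

   // WorldToScreen returns non-zero when the point is behind the viewer;
   // on success screen[0]/screen[1] are normalised to the -1..1 range
   if (gEngfuncs.pTriAPI->WorldToScreen(worldOrigin, screen))
      return false;

   x = (int)((1.0f + screen[0]) * 0.5f * screenWidth);
   y = (int)((1.0f - screen[1]) * 0.5f * screenHeight);
   return true;
}

// Read one pixel's colour back from the framebuffer. OpenGL's origin is the
// bottom-left corner, hence the flipped y.
void ReadScreenColour(int x, int y, int screenHeight, unsigned char rgb[3])
{
   glReadPixels(x, screenHeight - 1 - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, rgb);
}
```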