Pleo's camera and hardware have all the capability to do what you want . . . but not while running the LifeOS. What you most likely need to do is create a new OS (possibly reusing pieces of the old one). With a new OS, you can drop all of the "emotional" baggage, ignore the irrelevant sensor input, and concentrate on processing the visual and IR input, then trigger the pre-programmed movements.
You will need a very well-lit maze with lots of room for Pleo to maneuver. Use high contrast between wall, floor, and egg (e.g., white floor, black walls, bright green egg).
Then just move the head to pan the camera left . . . Egg detected?
Yes - Attack egg. Finished.
No - Is wall detected?
No - Walk left one space. Start sequence over again.
Yes - Move head to pan camera straight ahead . . . Egg detected?
Yes - Attack egg. Finished.
No - Is wall detected?
No - Walk ahead one space. Start sequence over again.
Yes - Move head to pan camera to the right . . . Egg detected?
Yes - Attack egg. Finished.
No - Is wall detected?
No - Walk right one space. Start sequence over again.
Yes - Dead end. Rotate body 180 degrees. Start sequence over again.
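The sequence above can be sketched as a single decision function. This is just an illustration of the logic, not PDK code — the sensor readings are stubbed as booleans that, on the real robot, would come from your camera (egg) and IR (wall) processing:

```python
# Sketch of the scan-left/ahead/right decision sequence.
# egg/wall: dicts mapping 'left'/'ahead'/'right' to detection booleans.

def next_action(egg, wall):
    """Return the action the pan-and-check sequence would take."""
    for direction in ("left", "ahead", "right"):
        if egg[direction]:
            return "attack " + direction   # egg detected: attack, finished
        if not wall[direction]:
            return "step " + direction     # open space: walk one space, restart
    return "turn around"                   # dead end: rotate 180 degrees, restart
```

Calling this in a loop (pan, read sensors, act) gives you the brute-force walk; the "Finished" branch just means you stop looping once an attack action comes back.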
This is the basic logic for what I assume you are trying to do (brute-force a way through the maze, not "solve" or "learn" the maze). You could instead do a full visual sweep from left to right first to see whether the egg is in sight and where the walls are, process the logic to make the decision, then walk as determined by the logic.
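A minimal sketch of that sweep-first variant, assuming a hypothetical read_sensors(direction) callback that pans the head and returns an (egg_seen, wall_seen) pair:

```python
# Sweep-first variant: gather all three readings in one pan, then decide.
# read_sensors is a stand-in for your pan-head-and-process routine.

def sweep_and_decide(read_sensors):
    readings = {d: read_sensors(d) for d in ("left", "ahead", "right")}
    # Prefer attacking the egg anywhere in view...
    for d, (egg_seen, wall_seen) in readings.items():
        if egg_seen:
            return "attack " + d
    # ...otherwise take the first open direction.
    for d, (egg_seen, wall_seen) in readings.items():
        if not wall_seen:
            return "step " + d
    return "turn around"
```

The advantage over the pan-and-check sequence is that one full sweep lets you spot the egg on the right even when the left happens to be open.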
The head movements, walking left/straight/right, and turning around can all be done using pre-developed movements from the PDK. The logic is pretty simple to program. So all you really need to work on is processing the visual and IR signals to detect the walls and the egg.
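For the visual part, a bright green egg against white/black surfaces can be flagged with crude color thresholding. A sketch, assuming the frame arrives as a list of (r, g, b) tuples; the thresholds and the 2% area cutoff are illustrative, not tuned for Pleo's actual camera:

```python
# Crude color thresholding: count pixels where green strongly dominates.
# Thresholds are placeholders to be tuned against real camera frames.

def egg_in_frame(pixels, min_fraction=0.02):
    """Return True if enough pixels look 'bright green'."""
    green = sum(1 for (r, g, b) in pixels
                if g > 150 and g > 2 * r and g > 2 * b)
    return green >= min_fraction * len(pixels)
```

With the high-contrast setup suggested above, even this simple a test can work; wall detection would come from the IR readings rather than the camera.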