As far as 2) goes, I suspect that if you tormented enough biological creatures in this manner, you would see some of them 'paddling' in the hopes of touching the ground again as well. Please don't conduct that experiment, though... or at least don't name me as co-researcher!

In terms of the general question you've raised, I think the important thing to remember is that things that are very easy for humans, things so natural to us that we might not even think about them, are very difficult for the computers we create, and vice versa. A quote I read recently went something like, 'We call things artificial intelligence because we can't do them yet; once we can, we give them a different name.' We can do obstacle detection, and we can get a robot to be aware of whether it is 'on the ground' or not... but Pleo itself is still an artificial intelligence and can't do all of those things at once perfectly. In fact, we can't even match human ability in many of the things we have renamed. If you look into some of the problems experienced (and sometimes eventually overcome) by other famous robots, such as ASIMO and the many puzzle-solving robots at universities around the world, you realise just how hard these things are to implement.
By which I don't mean to suggest you should just ignore what you identify as shortcomings in Pleo's programming. I think there are two different ways to react to a Pleo, though, and you can do both at different times even though they seem to contradict each other. I've played with my Pleo enough now to see where her limits are, and I've done some research on how she works, so I'm thinking about how I can take advantage of that when I'm in 'developer mode'. When I'm just enjoying her as a 'robotic pet' of sorts, I play along more with what she's able to do, so that my actions don't break the illusion.

It seems quite natural to me, since this is pretty much how we behave with people who have a tendency to act illogically. I know a lot of people who are very poor at time management and will take on new commitments that are going to interfere with their ability to fulfil the existing ones. You can try to tell these people they are putting themselves in an impossible situation, but eventually you realise there is something about their internal 'programming' that genuinely stops them from behaving any differently, and the best you can do is act so that their tendencies don't inconvenience you.

There's actually a warning/disclaimer about the sort of behaviour you've identified in the booklet that comes with a new in-box Pleo rb. It says something about how Pleo won't be able to respond to your trying to get its attention if it's doing something else, like walking somewhere. So presumably there are a number of situations in which Pleo isn't 'paying attention' to some of its sensors, and you could demonstrate this by carefully planning your own behaviour. More experiments, perhaps?