Cybersecurity in robotics often boils down to one question: should we be worried about autonomous cars being hacked? The answer is simple: Hell, yes! And it is not just autonomous cars we should worry about; remember the Iranian SCADA-controlled robotic centrifuges that were enriching uranium until, thanks to the Stuxnet worm, they suddenly spun themselves to pieces? Robots present many vectors for attack and many objectives for an attack.

The sensors can be hacked, either to see what the robot is seeing, to substitute different readings (like the bus camera video loop in Speed), or to distort or poison the sensor readings so that the robot behaves badly or misidentifies objects; a great example is the glitter spray in Almost Human. GPS spoofing is a famous example, where a robot suddenly navigates to the wrong location.

Another objective is to take control of the robot directly, by taking over either the onboard control software or the software on the operator control unit.

Signal jamming is yet another objective: cause the robot to do bad things, or force it to fail over to a safe state (the way a drone returns home) because it has lost communications (assuming the software is distributed onboard and offboard).

Software updates can introduce bugs (which fortunately gave Murderbot the autonomy to be delightfully snarky), though the idea that robots could exploit bugs to modify their own code violates bounded rationality.
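The fail-to-a-safe-state behavior on lost communications is often implemented as a communications watchdog. Here is a minimal sketch of the idea; the names `CommsWatchdog` and `return_home` are hypothetical illustrations, not part of any real robot framework:

```python
import time

class CommsWatchdog:
    """Fail to a safe state when the operator link goes silent.

    Hypothetical sketch: a real system would run this check in a
    periodic control loop and wire failsafe_action to something
    like a return-to-home routine.
    """

    def __init__(self, timeout_s, failsafe_action):
        self.timeout_s = timeout_s            # silence tolerated before failsafe
        self.failsafe_action = failsafe_action  # e.g., return_home
        self.last_heartbeat = time.monotonic()
        self.triggered = False

    def heartbeat(self):
        """Call whenever a valid packet arrives from the operator."""
        self.last_heartbeat = time.monotonic()
        self.triggered = False

    def check(self, now=None):
        """Trigger the failsafe once if the link has been silent too long."""
        now = time.monotonic() if now is None else now
        if not self.triggered and now - self.last_heartbeat > self.timeout_s:
            self.triggered = True
            self.failsafe_action()
        return self.triggered
```

Note that a jammer does not need to take over the robot at all; merely keeping `heartbeat()` from ever being called is enough to force the robot into whatever its failsafe behavior happens to be, which is why that behavior must itself be chosen carefully.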
Bad software engineering and bugs: Westworld (the 1973 movie), The Murderbot Diaries (All Systems Red, Artificial Condition, Rogue Protocol, Exit Strategy)
Hacking in general: Almost Human, Level 5
For further reading:
G. W. Clark, M. V. Doran, and T. R. Andel, "Cybersecurity issues in robotics," 2017 IEEE Conference on Cognitive and Computational Aspects of Situation Management (CogSIMA), Savannah, GA, 2017, pp. 1-5.