For many homeowners, the traditional way to protect against burglars has been to ensure doors and windows are locked and perhaps leave a light on.
But such gestures might prove futile in years to come, after scientists warned that the next generation of home invaders could be robots programmed to gain entry through cat flaps or letter boxes.
Small robots powered by artificial intelligence (AI) are being developed that could breach traditional security safeguards. Delivered through small openings such as cat flaps, they could scan a person's home and retrieve keys, allowing a human burglar to enter.
Alternatively, scientists believe more advanced devices could use AI to search a property themselves for valuables or cash, using cameras to scan and access different rooms.
The robots could also be used to determine whether anybody is at home, relaying the information to a human operator who could then break in if the coast is clear.
The frightening prospect is just one area in which scientists and police believe AI could be used by criminals to exploit people in the future.
A study, published in 'Crime Science' by researchers at London's UCL, identified a range of criminal opportunities that technological advances could create. While the use of so-called "burglar bots" is regarded as a low-reward crime, scientists and police are concerned about "deepfake" videos and images that could exploit unsuspecting victims.
Using sophisticated software, criminals are able to generate convincing impersonations of real people, which could be used to persuade victims to part with money or reveal passwords.
Police fear unscrupulous criminal gangs could generate a video of someone from material freely available online and use it to trick that person's elderly parents into sending money.
The researchers also highlighted the potential risks posed by the roll-out of driverless cars, which they warned could be used by extremists to carry out terror attacks.
Professor Lewis Griffin, of UCL's computer science department and the senior author of the report, said: "As the capabilities of AI-based technologies expand, so too does their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives."