Microsoft tracks programmers' brain waves to reduce software bugs
Microsoft is testing a novel approach to reducing coding errors: tracking developers' brain waves and eye movements as they work, to identify when they are struggling to complete a task.
Programmers often work long hours, typing code while staring at computer monitors. Computer software can include millions of lines of code, so given the nature and the volume of the work involved, mistakes are unavoidable.
These mistakes, known in tech circles as 'bugs', can have serious consequences for customers. Reducing the number of coding bugs by any reasonable means is therefore a high priority for software companies.
Previous work to analyse the causes of bugs has focused on detecting correlations between the number of bug fixes and the quality of code after the bugs are detected.
However, Microsoft researcher Andrew Begel suggests that detecting when developers are struggling as they work could help to prevent bugs before they are introduced.
"My idea is that if the software developers are writing the code and causing the bugs, we should measure attributes of the developers themselves," said Begel.
"If we can figure out what cognitive or emotional issues lead to buggy code or lowered productivity, we can try to intervene and stop them from causing developers to make mistakes in the first place."
Together with a few academic and industrial colleagues, Begel has carried out tests using psycho-physiological sensors to measure developers' reactions to tasks.
In particular, he used eye-tracking technology, electrodermal-activity sensors (which measure changes in the skin’s ability to conduct electricity), and electroencephalogram sensors (which evaluate electrical activity in the brain).
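In practice, raw streams from sensors like these are typically collapsed into summary features before any prediction is made. The sketch below illustrates that general idea only; the feature names, sensor units, and sample values are invented for illustration and are not taken from the study.

```python
from statistics import mean, pstdev

def summarise_stream(samples):
    """Collapse a raw sensor stream into simple summary features.

    `samples` is a list of numeric readings (e.g. pupil diameter,
    skin conductance, or an EEG band power) recorded while a
    developer works on one task.
    """
    return {"mean": mean(samples), "std": pstdev(samples)}

def task_features(eye, eda, eeg):
    """Combine the three sensor streams into one feature dictionary
    that a difficulty classifier could consume."""
    features = {}
    for name, stream in (("eye", eye), ("eda", eda), ("eeg", eeg)):
        for stat, value in summarise_stream(stream).items():
            features[f"{name}_{stat}"] = value
    return features

# Synthetic readings for a single task (made up for illustration).
features = task_features(
    eye=[3.1, 3.4, 3.3, 3.6],      # pupil diameter, mm
    eda=[0.8, 0.9, 1.1, 1.0],      # skin conductance, microsiemens
    eeg=[12.0, 14.5, 13.2, 15.1],  # e.g. EEG beta band power
)
print(sorted(features))
```

A real pipeline would add many more features (blink rate, fixation duration, frequency-band ratios) and feed them to a trained classifier, but the windowed summarise-then-combine shape is the common pattern.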
Using this data, Begel was able to predict the difficulty of a task for a new developer with a precision of nearly 65 per cent. For new tasks the precision was even greater – almost 85 per cent.
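Precision here means the share of tasks flagged as difficult that genuinely were difficult (true positives divided by all positive predictions). A minimal illustration with made-up labels:

```python
def precision(predicted, actual):
    """Precision = true positives / all positive predictions."""
    tp = sum(1 for p, a in zip(predicted, actual) if p and a)
    fp = sum(1 for p, a in zip(predicted, actual) if p and not a)
    return tp / (tp + fp)

# Made-up example: the model flags four tasks as difficult, three truly are.
predicted = [1, 1, 0, 1, 1, 0]
actual    = [1, 1, 0, 0, 1, 1]
print(precision(predicted, actual))  # 3 of 4 flagged tasks were difficult -> 0.75
```

Note that precision says nothing about difficult tasks the model misses (that is recall), so a 65 or 85 per cent figure describes only how trustworthy the model's "difficult" flags are.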
The research stopped short of suggesting possible interventions for when a developer's readings indicate a heightened risk of introducing bugs.
However, Begel suggests that reducing the contrast on the display and making the fonts harder to read would force the developer to apply more brainpower to read and understand the code.
"We’re still at the experimental stage, learning to understand what all these sensors are telling us about the software developer," said Begel.
"If we can successfully learn a pattern that produces appropriate interventions at the right times, then the proof will be in the utility of the resulting tool."
Responding to the research, Amichai Shulman, chief technology officer at security firm Imperva, questioned the effectiveness of this method, describing it as 'tremendously intrusive'.
He said that one of the main reasons for software flaws today is that programmers are constantly under pressure to deliver more functionality in less time, and Begel's system would only increase this pressure.
"If we introduce a system that constantly holds back on programmers because they are stressed for some reason, we will effectively introduce unbearable delays into the project, which will of course put more pressure on those who perform the job when the schedule becomes tight," said Shulman.
"This is of course ignoring the fact that to some extent we want our programmers to be 'over' challenged by the problems they have to solve in code in order to keep them 'sharp' and happy with their job."
He added that Begel's system makes no distinction between critical mistakes and minor mistakes, inevitably leading to unnecessary delays.
"I'm pretty sure that the industry could take pieces of [the research] that would help us understand better why mistakes are happening and when, and therefore how to try and avoid that," said Shulman.
"However, I don't think that this is by itself an effective approach to improving software in general and software security in particular."