YOU say there is a crisis facing software engineering. What is this?
Essentially, we're trying to do many more things with our software: we're pushing our software to the edge, and we're trying to do this without making fundamental changes to the way in which we do our software engineering.
There are these bigger systems that we're trying to build that have far more advanced functionality than we've had in the past. We're relying on systems to be self-managing, yet we're still relying on our standard practices to modify them and, as a result, we're in danger of reaching an impasse where our software systems will not be able to keep up with our ambitions.
Does this mean software systems must be advanced enough to update themselves?
Yes. Some of those new uses involve evolving the software itself. For example, we have situations where software has to change but we don't want simply to shut it down to change it, because it could be a critical system. We want to be able to evolve it in a graceful way that's not ridiculously expensive, while it continues to deliver the increased functionality we want.
We want software that evolves by itself and makes some decisions by itself on how it has to change to handle new situations and new demands.
To what kind of software systems is this applicable?
An example application is a NASA concept mission: this is a mission that will run for multiple years out in the harsh environment of space and it's in such a position that we're not able to respond to situations ourselves.
By the time the system sends a message back saying that it has a problem, forty minutes may already have passed because of the distance involved, and even if we could respond immediately, the reply would take just as long to arrive, so the software has to know for itself what to do.
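The delay described above is simple light-time arithmetic. A minimal sketch, assuming an illustrative distance of 3.6e8 km (roughly the greatest Earth-Mars separation, not a figure from the interview):

```python
# Light-time delay for a distant spacecraft.
# The 3.6e8 km distance below is an invented illustration.

SPEED_OF_LIGHT_KM_S = 299_792.458  # speed of light in vacuum

def one_way_delay_minutes(distance_km: float) -> float:
    """Minutes for a radio signal to cover distance_km."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

delay = one_way_delay_minutes(3.6e8)  # roughly 20 minutes one way
round_trip = 2 * delay                # roughly 40 minutes before a reply arrives
```

At that distance even an instantaneous ground response still leaves the spacecraft waiting some forty minutes for help, which is why the software must decide on its own.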
Are you talking about software that is artificially intelligent?
Self-awareness doesn't necessarily mean any form of artificial intelligence (AI). These systems are aware of the space they occupy, they're aware of themselves and their own constraints and the constraints of the environment.
I've been working on exploration missions with what we call swarms of spacecraft. Instead of sending one large spacecraft, you send a number of smaller ones, and this has the advantage of resilience: if one is damaged, you don't lose the entire mission.
However, they have to co-ordinate in different ways and know what functionality they have. For example, one of the concepts we have is to send a thousand spacecraft. The problem with sending so many is that they have to survive alone for five years, so you expect to lose between 60pc and 70pc of them.
As they are lost, the survivors have to calculate what they can do as a group. If some of the data-collection devices have been lost, the others have to work out how they can get the job done without them and compensate for the loss.
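The group-capability check described above can be sketched in a few lines. This is an illustrative toy, not the mission's actual software; the instrument names and requirements are invented:

```python
# Hedged sketch: surviving swarm members pool their remaining
# instruments and check whether the mission's required measurements
# are still covered. All names below are invented for illustration.

required = {"magnetometer", "spectrometer", "camera"}

# Instruments carried by the craft still alive after losses.
survivors = {
    "craft_07": {"magnetometer", "camera"},
    "craft_19": {"spectrometer"},
    "craft_23": {"camera"},
}

def group_capabilities(swarm: dict[str, set[str]]) -> set[str]:
    """Union of every surviving craft's instruments."""
    caps: set[str] = set()
    for instruments in swarm.values():
        caps |= instruments
    return caps

missing = required - group_capabilities(survivors)
# An empty set means the group can still cover every required measurement.
```

The point of the sketch is that capability is a property of the group, not of any one craft: as long as the union of the survivors' instruments covers the requirements, the swarm can redistribute the work.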
Does this mean software systems of the future will copy nature's design?
Well, this area is called autonomic computing and it is inspired by the human autonomic nervous system - some people prefer to call it biologically inspired computing.
It has relationships to AI, but I always tell people that it is inspired by nature, not copying it; AI is trying to copy how the mind works.
© Silicon Republic Ltd