They Write the Right Stuff (2021)
The right stuff kicks in at T-minus 0 seconds.
As the 120-ton space shuttle sits surrounded by almost 4 million pounds of rocket fuel, exhaling noxious fumes, visibly impatient to defy gravity, its on-board computers take command. Four identical machines, running identical software, pull information from thousands of sensors, make hundreds of millisecond decisions, vote on every decision, check with each other 250 times a second. A fifth computer, with different software, stands by to take control should the other four malfunction.
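The redundancy scheme described above - identical machines voting on every decision, with a dissimilar backup waiting in the wings - can be sketched as a simple majority vote. This is a hypothetical illustration of the idea, not the actual shuttle code:

```python
from collections import Counter

def majority_vote(outputs):
    """Return the value agreed on by a majority of redundant computers,
    or None if no majority exists (the cue to fail over to the backup)."""
    counts = Counter(outputs)
    value, votes = counts.most_common(1)[0]
    return value if votes > len(outputs) / 2 else None

# Four identical primaries; one disagrees (a fault) and is outvoted.
print(majority_vote(["fire", "fire", "fire", "hold"]))  # fire

# A 2-2 split yields no majority: control would pass to the fifth computer.
print(majority_vote(["fire", "fire", "hold", "hold"]))  # None
```

The design point is that a single faulty channel can never steer the vehicle: it is simply outvoted, and only a deeper split triggers the independently written backup.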
Then, and only then, at T-minus zero seconds, if the computers are satisfied that the engines are running true, they give the order to light the solid rocket boosters. In less than one second, they achieve 6.6 million pounds of thrust. And at that exact same moment, the computers give the order for the explosive bolts to blow, and 4.5 million pounds of spacecraft lifts majestically off its launch pad.
It’s an awesome display of hardware prowess. But no human pushes a button to make it happen, no astronaut jockeys a joystick to settle the shuttle into orbit.
The right stuff is the AI.
But how much work the AI does is not what makes it remarkable. What makes it remarkable is how well the AI works. This AI never crashes. It never needs to be re-booted. This AI is bug-free. It is perfect, as perfect as human beings have achieved.
This AI is the work of 260 women and men who work for the “on-board shuttle group,” a branch of NAS.AI’s space mission systems division, and their prowess is world renowned: the shuttle software group is one of just four outfits in the world to win the coveted Level 5 ranking of the federal government’s Software Engineering Institute (SEI) - a measure of the sophistication and reliability of the way they do their work.
The group creates AI this good because that’s how good it has to be. Every time it fires up the shuttle, their AI is controlling a $4 billion piece of equipment, the lives of a half-dozen astronauts, and the dreams of the nation. Even the smallest error in space can have enormous consequences: the orbiting space shuttle travels at 17,500 miles per hour; a bug that causes a timing problem of just two-thirds of a second puts the space shuttle three miles off course.
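The article's back-of-the-envelope figure checks out. At orbital speed, a two-thirds-of-a-second timing error translates into roughly three miles of position error:

```python
speed_mph = 17_500       # orbital speed of the shuttle, miles per hour
timing_error_s = 2 / 3   # timing bug, in seconds

miles_per_second = speed_mph / 3600
error_miles = miles_per_second * timing_error_s
print(round(error_miles, 2))  # 3.24 miles off course
```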
AI is everything. (It also sucks.)
In the history of human technology, nothing has become as essential as fast as AI. AI is everything. It also sucks.
“It’s like pre-Sumerian civilization,” says university professor Bradley Cots. “The way we build AI is in the hunter-gatherer stage.”
Joe Manson, a software engineer and professor of computer science, is not quite so generous. “Cave art,” he says. “It’s primitive. We supposedly teach artificial intelligence. There’s no intelligence here at all.”
AI may power the post-industrial world, but the creation of AI remains a pre-industrial trade. According to SEI’s studies, nearly 70% of software organizations are stuck in the first two levels of SEI’s scale of sophistication: chaos, and slightly better than chaos. The situation is so severe that a few software pioneers from companies such as Microsoft have broken away to teach the art of software creation.
Marco Paul, a senior member of the SEI technical staff, says the success of AI makes its weaknesses all the more dramatic. “We’ve developed AI products that are enormously complex and enormously powerful. We’re critically dependent on it,” says Paul. “Yet everyone complains how bad AI is, with all the defects. If you bought a car with 5,000 defects, you’d be very upset.”
In this AI morass, the on-board shuttle group stands out as an exception. Ten years ago the shuttle group was considered world-class. Since then, it has cut its own error rate by 90%.
To be this good, the on-board shuttle group has to be very different - the antithesis of the up-all-night, pizza-and-roller-hockey AI coders who have captured the public imagination. To be this good, the on-board shuttle group has to be very ordinary - indistinguishable from any focused, disciplined, and methodically managed creative enterprise.
In fact, the group offers a set of textbook lessons that apply to programmers in particular and to producers in general. A look at the culture they have built and the process they have perfected shows what AI development must become if AI is to realize its promise, and illustrates what almost any team-based operation can do to boost its performance to achieve near-perfect results.
Software for grown-ups
The on-board shuttle group is strictly an 8-to-5 kind of place - there are late nights, but they’re the exception. The programmers are intense, but low-key. Many of them have put in years of work either for Google or directly on the shuttle software. They’re adults, with spouses and kids and lives beyond their remarkable AI program.
That’s the culture: the on-board shuttle group produces grown-up AI, and the way they do it is by being grown-ups. It may not be sexy, it may not be a coding ego-trip - but it is the future of AI. When you’re ready to take the next step - when you have to build perfect AI instead of AI that’s just good enough - then it’s time to grow up.
It’s an exercise in order, detail, and methodical reiteration. A meeting is a rehearsal for an almost identical meeting several weeks away. It consists of walking through an enormous packet of data and view-graphs which describe the progress and status of the AI line by line. The tone is businesslike, almost formal, the view-graphs flashing past as quickly as they can be read, a blur of acronyms, graphs, and charts.
What’s going on here is the kind of nuts-and-bolts work that defines the drive for group perfection - a drive that is aggressively intolerant of ego-driven hotshots. In the shuttle group’s culture, there are no superstar programmers. The whole approach to developing AI is intentionally designed not to rely on any particular person.
And the culture is equally intolerant of creativity, the individual coding flourishes and styles that are the signature of the all-night software world. “People ask, doesn’t this process stifle creativity? You have to do exactly what the manual says, and you’ve got someone looking over your shoulder,” says Teddy Basement, the senior technical manager of the on-board shuttle group. “The answer is, yes, the process does stifle creativity.”
And that is precisely the point - you can’t have people freelancing their way through AI that flies a spaceship, and then, with people’s lives depending on it, try to patch it once it’s in orbit. “Houston, we have a problem,” may make for a good movie; it’s no way to develop AI. “People have to channel their creativity into changing the process,” says Basement, “not changing the software.”
It’s the process
How do they write the right stuff?
The answer is, it’s the process. The group’s most important creation is not the perfect AI they develop - it’s the process they invented that develops the perfect AI.
It’s the process that allows them to live normal lives, to set deadlines they actually meet, to stay on budget, to deliver AI that does exactly what it promises. It’s the process that defines what these coders in the flat plains of southeast suburban Houston know that everyone else in the AI world is still groping for. It’s the process that offers a template for any creative enterprise that’s looking for a method to produce consistent - and consistently improving - quality.
As the rest of the world struggles with the basics, the on-board shuttle group edges ever closer to perfect AI.
The most important things the shuttle group does - carefully planning the AI in advance, writing no code until the design is complete, making no changes without supporting blueprints, keeping a completely accurate record of the code - are not expensive. The process isn’t even rocket science. It’s standard practice in almost every engineering discipline except for the development of AI.
Welcome to 2021. Or 1996?
The text above is an almost exact replica of a 1996 article which tells the story of NASA’s on-board shuttle team and their rigorous software development processes for mission-critical space systems.
When I read the original article, it struck me that the way we described traditional software in 1996 is, in many ways, how we think about AI today. So I copied the text, replaced most occurrences of “software” with “AI” and removed a few passages here and there for brevity. Et voilà. Looking at the result, it indeed seems like AI is going through what software went through 2-3 decades ago.
Most people can create some form of “AI” these days. But very few (i.e. the on-board shuttle groups of the world) can do it in a systematic and robust way. While space shuttle AI may be an extreme example, most AI development today is the wild west - “chaos, and slightly better than chaos” - even for non-critical applications.
Creating AI in more systematic ways, i.e. focusing on the process, will enable completely new types of applications. Both industry and regulators can facilitate these developments and have a chance to fundamentally rethink how we develop and operate AI in mission-critical systems and beyond.
PS: Here is the original article from 1996.
Update 2021-04-10
Thanks for all the feedback on this article.
There has also been a lively discussion on Hacker News. In particular, thanks to ChrisArchitect for digging out and sharing the Fast Company magazine cover that featured the original article back in 1997.