Posts from January 2012

Of plans gone awry

Saturday 28th January 2012

About 18 months ago I graduated with a pretty good degree in Civil Engineering, from a pretty good university. In the last few months, I’ve also graduated from an MSc course in one particular part of Civils, again from a pretty good university. During the last year – and particularly the last few months since I finished my masters – I’ve been looking for and applying to jobs within that industry. I’ve had a few interviews too, but for whatever reason got nowhere. Lately, I’ve become bored of being skint, so I decided to look for a part-time job, something to do/earn money with while I keep applying for “proper” jobs.

And, well, that’s sort of what I’ve done. I’ve taken what is essentially a part-time job, one that I know will give me work for the majority of the year. But as part-time jobs go, it’s a bit good…

I’m going to be working for a tyre company. Specifically, in the motorsport part of the company. Providing support to their customers in the World Endurance Championship. So basically, I’m going to go to a load of races, get paid for it, and get involved in engineering some of the cars that are taking part.

So I’ll be working at races at Sebring (Florida), Spa-Francorchamps (Belgium), Le Mans, Silverstone, Interlagos (Sao Paulo), Bahrain (or wherever they decide to reschedule it), Fuji (Japan) and somewhere in China. And I’ll also hopefully be involved in a bit of testing; I already know that I’ll be going to a tyre test in February, most likely at Monza.

So, my initial plan of “finish uni, get a civil engineering job” has sort of gone awry; I don’t know if this job will lead to any future work (although I get the impression that it possibly could), and of course I’m now not entirely available to start a “normal” office-based engineering job until about November, when the WEC season ends. But, well, I can’t say I’m really complaining…

I still find it slightly amazing that I had a few interviews for jobs that I should really be ideal for given my experience and my qualifications, and got nowhere. But somehow I’ve landed a job that’s completely different to what I’ve done before. Again, I can’t say I’m moaning; at least one of the companies I’ve interviewed for in the last 6 months is now in trouble. And, er, in this job I get to work in motorsport!

And that still hasn’t really sunk in. The way I keep looking at it is: I’ve wanted to go to the Le Mans 24 Hours for years. I’m going this year, and I’ll be in the pitlane for the race. I’ve been to spectate at the Silverstone 6 Hours for the last couple of years, this year I’ll be working there. It’s all a bit unreal really.

So that’s what I’m doing this year. I’m very excited.

Posted In: Engineering, Motorsport | 2 Comments

“Foolproof, and incapable of error”

Saturday 21st January 2012

Whilst I was thinking about and writing the previous post, a couple of extra things came to mind which I couldn’t really fit into the post. So I thought I might as well do a follow-up with a couple of extra observations. I did intend to write this earlier, but partly I was busy (more on that in a following post) and mostly I just didn’t get around to it.

1) The previous post was not so much about aeroplanes, but more about interfaces in general. Be that with machinery like a plane, or a device like a phone, or even infrastructure or services. And it struck me that one of the few organisations that consistently manages to create things with great interfaces is Apple. Not so much with their computers (I’m really not a big fan of MacOS, probably because I’m more used to Windows), but their iOS devices (iPhones and iPads) are really good examples of things which simplify tasks through good interface design.

It strikes me that if the computing business ever starts to go slack (!), Apple could do a good business out of consultancy; imagine if they applied their UI design skills to things other than making iPhones and iPads. This isn’t as daft as it sounds; some ex-Apple employees recently set up a business to make a better thermostat. That’s a specific example of someone applying the Apple approach to interfaces to a different type of product, and I’m sure there are other things which would benefit from the same approach.

2) For some reason, I also started thinking about 2001: A Space Odyssey (spoilers follow. Although, it’s a 40-odd-year-old book/film, so I guess most people at least vaguely know the plot. If you don’t, then go read the book and watch the film. They’re classics). The first – obvious – point is that a lot of the interfaces in that film do appear to tend towards simplicity. There are loads of little things: the video phone booth that Dr Floyd uses near the start of the film, the tablets that Bowman and Poole use on Discovery, all the spaceship status screens that look like they’re intended to be simple, and of course there’s HAL 9000.

On the topic of HAL, it occurred to me that his demise is pretty relevant too. HAL was programmed to help the crew, to convey information to them about Discovery and about the status of the mission. But before the crew left Earth the parameters of the mission were changed; this was secret, and the crew were not to be told until Discovery reached Jupiter. As the central computer, HAL knew the real purpose of the mission, but was not allowed to tell the crew. He was being asked to hide information, to lie. This ran counter to HAL’s programming – he was designed to give information, not to hide it – and because of that conflict he perceived there to be a problem. Which he then set out to rectify…

The point is, HAL failed because the people who defined his tasks for the mission did so incorrectly. The computer carried out its tasks as it saw best, but those tasks were in conflict with each other. And so the failure of the mission was the result of misuse of the computer. Now obviously the details in this and in the example in the previous post are very different, but in general, it’s the same fault: the computers behaved exactly as they were asked, the error arose from the way people were trying to use them.

And, really, how clever is that? That 40 years ago, people were thinking about how we’ll be using these ultra-sophisticated computers, and were (in a very broad sense) predicting some of the problems that we’re starting to see. Just makes me realise how great a job Clarke (and Kubrick, I think) did in writing that story, and how many ideas they’ve managed to pack into it. I’ve read the book many times already, but I really need to re-watch the film.

Posted In: Geek, Movies, Technology | No Comments

The perils of poor UI

Sunday 15th January 2012

You might remember that a couple of years ago, an Air France plane crashed into the Atlantic. Recently, Popular Mechanics ran an article which explained the causes of the accident using data from the aircraft’s black box. In the immediate aftermath of the accident, it was assumed that something on the aircraft must have failed as it passed through a storm. In fact, that turned out to be wrong; the aircraft was mostly fine, and the pilots “flew a perfectly good plane into the ocean”.

According to the article (which is fascinating, I really urge you to read it), the pitot tubes on the surface of the aircraft (its airspeed sensors) became iced over, which meant that the pilots lost their airspeed indication. Without this data the autopilot couldn’t fully function, and so it partially disengaged. While this went on, one of the pilots decided to put the plane into a climb, which caused it to stall (a loss of lift that occurs when the wings exceed their critical angle of attack). When this happened, the pilots tried to continue climbing; the wrong response, and it ultimately caused the plane to lose altitude.

The pilots on commercial aircraft such as this are highly trained, so why, when the plane started to stall, did one of them do precisely the opposite of what he should have done?

‘…the reason may be that they believe it is impossible for them to stall the airplane. It’s not an entirely unreasonable idea: The Airbus is a fly-by-wire plane; the control inputs are not fed directly to the control surfaces, but to a computer, which then in turn commands actuators that move the ailerons, rudder, elevator, and flaps. The vast majority of the time, the computer operates within what’s known as normal law, which means that the computer will not enact any control movements that would cause the plane to leave its flight envelope. “You can’t stall the airplane in normal law,” says Godfrey Camilleri, a flight instructor who teaches Airbus 330 systems to US Airways pilots.

But once the computer lost its airspeed data, it disconnected the autopilot and switched from normal law to “alternate law,” a regime with far fewer restrictions on what a pilot can do. “Once you’re in alternate law, you can stall the airplane,” Camilleri says.

It’s quite possible that Bonin had never flown an airplane in alternate law, or understood its lack of restrictions. According to Camilleri, not one of US Airways’ 17 Airbus 330s has ever been in alternate law. Therefore, Bonin may have assumed that the stall warning was spurious because he didn’t realize that the plane could remove its own restrictions against stalling and, indeed, had done so.’

In normal flight, the computer systems try to make it easier to fly the plane. But once the computers stopped getting inputs from some sensors, those systems disengaged and so altered the behaviour of the aircraft. And so it’s conceivable that efforts to make the plane safer by making piloting easier – by simplifying the controls and handing some responsibility to the computers – may have actually contributed to this accident. There could be a number of reasons for that; the change from normal to alternate law may have been unintuitive or non-obvious to the pilots. Or perhaps it’s simply that taking some of the responsibility for flying the plane away from pilots for the majority of the time causes them to become complacent – to think that the plane couldn’t stall – or meant that they weren’t sure how to react when the computers couldn’t help them. How sensible is it to introduce inconsistent behaviour into any control system, let alone that of a commercial aircraft?
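The switch between the two regimes can be sketched as a toy state machine. This is purely illustrative – the class, the limit value and the logic below are my own inventions, nothing like the real Airbus flight software:

```python
# Toy sketch (not a real flight-control model): a fly-by-wire computer
# that clamps pilot inputs while in "normal law", but passes them through
# unrestricted once degraded sensor data forces a switch to "alternate law".

MAX_SAFE_PITCH_CMD = 0.5  # arbitrary envelope limit applied in normal law


class FlightControlComputer:
    def __init__(self):
        self.law = "normal"

    def update_sensors(self, airspeed_valid):
        # Losing reliable airspeed data degrades the control law.
        if not airspeed_valid:
            self.law = "alternate"

    def pitch_command(self, stick_input):
        if self.law == "normal":
            # Envelope protection: the computer refuses inputs that would
            # take the aircraft outside its flight envelope.
            return max(-MAX_SAFE_PITCH_CMD, min(MAX_SAFE_PITCH_CMD, stick_input))
        # Alternate law: the same stick input passes straight through.
        return stick_input


fcc = FlightControlComputer()
print(fcc.pitch_command(1.0))   # normal law: clamped to 0.5
fcc.update_sensors(airspeed_valid=False)
print(fcc.pitch_command(1.0))   # alternate law: 1.0 passes through
```

The point of the sketch is the inconsistency: the same stick input produces different behaviour depending on a mode that the pilot may not have noticed has changed.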

Hang on though, there’s more than one pilot flying the plane. When the aircraft began to stall one of them behaved incorrectly, but this is partly why there’s more than one pilot. Why didn’t the other pilot spot the mistake, and do something to solve it? Well, the Popular Mechanics article also picks up on another part of the plane’s control mechanisms which may have contributed to this:

‘Unlike the control yokes of a Boeing jetliner, the side sticks on an Airbus are “asynchronous”—that is, they move independently. “If the person in the right seat is pulling back on the joystick, the person in the left seat doesn’t feel it,” says Dr. David Esser, a professor of aeronautical science at Embry-Riddle Aeronautical University. “Their stick doesn’t move just because the other one does, unlike the old-fashioned mechanical systems like you find in small planes, where if you turn one, the [other] one turns the same way.” Robert has no idea that, despite their conversation about descending, Bonin has continued to pull back on the side stick.’

Neither pilot knew what the other was doing. So one pilot was pulling back on the controls – the wrong thing to do in a stall – and the other one had no idea.
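The difference between linked and independent controls can be sketched in the same toy fashion. Again, this is an invented illustration – in particular, the rule for combining the two inputs is an assumption, not Airbus’s actual implementation:

```python
# Toy illustration of why asynchronous side sticks can hide one pilot's
# input from the other. The numbers and the combination rule are invented.

def linked_yokes(left, right):
    # Mechanically linked yokes: both move together, so each pilot
    # physically feels the other's input. Modelled here as both yokes
    # ending up at the same combined position.
    position = left + right
    return position, position  # what each pilot sees at their own controls

def asynchronous_sticks(left, right):
    # Independent side sticks: each stick stays where its pilot puts it,
    # so neither stick reveals the other pilot's command.
    return left, right

# Robert holds his stick neutral; Bonin holds his fully back.
robert, bonin = 0.0, 1.0

print(linked_yokes(robert, bonin))         # both pilots see (1.0, 1.0)
print(asynchronous_sticks(robert, bonin))  # Robert's stick still reads 0.0
```

With linked controls the mistake is visible at both seats; with independent sticks, the only evidence of it is the aircraft’s behaviour.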

On one level, I’m simply amazed that this can happen, that the pilots can be unaware of what the other is doing. But then, I also imagine that this is a fairly stressful situation – being thrown around in an aircraft during a storm, with all sorts of alarms sounding – and that within that situation, irrespective of your training, it’s kind of easy to make a mistake.

What I mostly find interesting about this accident is that it was essentially caused by human error, and by the way that humans interact with the aircraft. By that, I mean that the pilots made several mistakes; they shouldn’t have been near the storm in the first place, and they should have acted differently once they reached the storm. But those human errors were, in part, brought about or exacerbated by the aircraft’s control systems.

In other words, this accident was in no small part caused by poor user interface design. The built-in inconsistency between normal and alternate law possibly confused the pilots at a time when they didn’t have the capacity to deal with the confusion, and the asynchronous controls hindered communication between the pilots. Because of these things, competent pilots flew a perfectly operable aircraft into the Atlantic Ocean.

These are things that most engineers probably wouldn’t think about. We’re technical people – that’s why we’re engineers – so we think about numbers, about science, about the basic mechanics that underlie how something works. But that’s not the only thing that’s important about a design; it’s also important to consider how people are going to use the thing you’re making. This is applicable to most designs, whether you’re making an aircraft or a building or a phone.

In this case we’ve seen something particularly interesting happen, since the designers have tried to make the plane easier to fly, by delegating some control to autopilots in normal law. But it appears that simplifying the controls might have actually contributed to the accident by confusing one of the pilots. This seems somewhat unintuitive; if we make something easier to use then it seems fair to reason that we also reduce the likelihood of someone using it wrong, and so make it safer. But it’s human nature to be lazy, so when you tell someone that they ordinarily don’t need to think about a particular variable, then they probably won’t think about it at all.

Now I don’t point this out to make an argument against any of the control systems that Airbus build into their aircraft (although, really, asynchronous controls? Isn’t that just obviously a bad idea?), or against making things simpler. Airbus probably know what they’re doing (“probably” being the operative word*). The point is that working out how someone will use something is just as important as figuring out how to make something work; it’s something that should be obvious, but that I suspect is often seen as a secondary consideration.

The designers of the aircraft obviously have thought about this, and their solution was to try to make it simpler to use by hiding some of the complexity from the pilots in normal law; but does that really make it simpler to fly the plane? Perhaps in normal flight, but I suspect that we’d like our aircraft to be designed with abnormal flight in mind as well. And in that situation, perhaps what would really make things simpler is a way of helping the pilots to deal with the complexity, rather than trying to shield them from it and presenting them with unnecessary changeability when that is not possible.

* The Airbus A380 is big, and so Airbus tried to make it as light as possible. To do that, they’ve used carbon fibre reinforced plastic in certain parts of the structure, notably the wings. Carbon fibre: light! strong! stiff! notoriously brittle! Er, hang on a minute…

When I read that they’d used composites, I wondered whether it was such a great idea. My main concern was durability: would the material start to crack after a certain number of load cycles? Imagine my absolute lack of surprise when it was reported recently that Qantas and Singapore Airlines have discovered cracks on the wings of some of their A380s… Airbus say that it’s not important, that the cracks are on non-critical parts of the aircraft (although… really? I highly doubt that it’s designed to crack). I’m sure they’re right, but it’ll be an interesting one to watch.

Posted In: Engineering | 5 Comments