I was trained as an operations researcher, both in my PhD with Horacio Yanasse (PhD, MIT OR center) and in my MSc with José Ricardo de Almeida Torreão (PhD, Brown Univ Physics).
An operations researcher is a decision scientist, a mixture of an economist with an engineer. Or a mixture of a computer scientist with a business administrator. The basic idea is to find a business problem, build a mathematical model of it, then solve it, obtaining the lowest-cost or highest-profit solution. A model usually looks like this (a rather simple one):
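For instance, a toy production-planning model (the numbers are invented here purely for illustration) might maximize profit from two products under resource limits:

```latex
\begin{aligned}
\text{maximize}\quad   & 3x_1 + 5x_2        && \text{(total profit)} \\
\text{subject to}\quad & x_1 + 2x_2 \le 14  && \text{(machine hours available)} \\
                       & 4x_1 + 3x_2 \le 30 && \text{(raw material available)} \\
                       & x_1, x_2 \ge 0     && \text{(no negative production)}
\end{aligned}
```

Here x1 and x2 are the quantities to produce of each product; solving the model means finding the values that give the highest profit while respecting every constraint.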
During all those years, of course, I made a great many friends working in operations research.
Can you imagine what operations researchers talk about when they're not doing OR? When they're having dinner or a cup of coffee?
It usually goes like this:
"God, I still don't get that."
"This thing, man, it's so depressing."
"You know, the fact that industry practically ignores what we do. We keep on here doing amazing work which could save millions, perhaps billions of dollars, and we're practically ignored by industry. It's so hard to see a successful application in the real world. Why? How can this be? Isn't it unbelievable?"
The conclusion we always reached was that the blame lay with "the others". Businesspeople are just stupid; they can't grasp this. Or maybe that classic: "They'll spend 10 million in advertising to make 11 million, but they won't spend 1 million to save 10 million. Stupid, stupid people".
At first I thought it was basically a Brazilian issue. The Brazilian OR community is strong; there are truly world-class people in it. Yet OR is hardly applied to industry around our jungles.
But it's a worldwide phenomenon. Americans, Japanese, and Europeans share the same complaints.
So after many years I have come to a different conclusion. It's not that businesspeople are stupid. In fact, quite the contrary (hopefully none of my friends still in the field will read this, but it's true).
OR isn't applied because of the nature of the work.
An OR model can indeed save billions of dollars--as some industries, such as airlines, have found out. But the problem lies in the static nature of models versus the dynamic nature of business: a static model doesn't reflect what the real world is like.
Let's say you've spent some years and developed a really groundbreaking model to solve, for example, fleet assignment. Airlines have numerous types of planes, each with particular carrying capacities, fuel consumption, flight range, and maintenance restrictions. How do you assign each of your aircraft to each of your flight legs while minimizing costs? That's a mathematical problem with a huge number of possibilities, an NP-hard problem that demands enormous computational effort and can only be solved to optimality if the dataset is small.
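To make the combinatorial explosion concrete, here's a deliberately tiny sketch (fleet types, costs, and demands are all invented): brute-force enumeration of every way to assign an aircraft type to each flight leg, which is exactly the approach that becomes hopeless at real-airline scale.

```python
from itertools import product

# Toy fleet-assignment sketch (all data invented for illustration):
# assign one aircraft type to each flight leg, minimizing total cost,
# subject to each leg's passenger demand fitting the aircraft's capacity.

fleet = {                 # type -> (seat capacity, cost per leg)
    "B737": (140, 20_000),
    "A330": (280, 45_000),
    "E195": (110, 15_000),
}
legs = {                  # leg -> expected passenger demand
    "GRU-SDU": 120, "GRU-GIG": 90, "GRU-BSB": 200, "GRU-REC": 150,
}

def best_assignment(fleet, legs):
    best_cost, best_plan = float("inf"), None
    # Exhaustive search: |fleet| ** |legs| combinations -- fine for a toy,
    # hopeless for a real airline with hundreds of legs (hence NP-hard).
    for combo in product(fleet, repeat=len(legs)):
        plan = dict(zip(legs, combo))
        if any(fleet[t][0] < legs[l] for l, t in plan.items()):
            continue      # capacity violated on some leg: infeasible
        cost = sum(fleet[t][1] for t in plan.values())
        if cost < best_cost:
            best_cost, best_plan = cost, plan
    return best_cost, best_plan

cost, plan = best_assignment(fleet, legs)
print(cost, plan)
```

Even this toy already checks 3^4 = 81 combinations; real instances are tackled with integer-programming solvers and clever decompositions, not enumeration. The point stands either way: the model encodes today's fleet, today's legs, today's rules.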
After you have a working system, the problem becomes clear. If and when the rules of the game change, your math model no longer reflects the new reality. It either has to take on more restrictions or, in the most usual of cases, has to be rebuilt from scratch, with whole new dynamics. Models are cast in stone, and business life shifts almost as rapidly as a politician's reputation. Airlines have been able to use models, as have other industries, but in real life the music is always changing, and models can't dance to the tune.
I wrote about machine translation as an avenue for computational cognitive scientists to make an impact in technology. Here's another one.
For years I've called it "autoprogramming", and it is, I guess, a long-lost dream for computer scientists. Imagine a model that is able to self-destruct automatically when its context has changed; a model that is able to reconstruct itself according to the new tune of the moment. This requires an immense amount of perception, learning from feedback, flexible adaptation, a high-level, abstract view of what's going on, and other capacities which show up, for example, in the Copycat project, but are far, far away from current OR/management science.
This type of self-reorganizing model should, in principle, exhibit a whole spectrum of cognitive abilities. It should understand what's going on. As of today, it is pure science fiction. But it can be done, especially if one starts from restricted domains which can change only within some small boundaries.
There's a lot of research going on to make solution algorithms more flexible and adaptable, on meta-heuristics and on meta-meta-heuristics; however, it's one thing to have flexible solution methods, and another thing entirely to have a flexible diagnosis/model/solution system. The fact that the models and problems change practically weekly makes it extremely unlikely that industry will ever adopt them in a truly large-scale manner.
This is largely unexplored territory, and cognitive technologies are especially well suited to explore it. If a nurse can go through the diagnosis/model/solution cycle in the furiously fast-changing scenario of a baby turning blue, then we know that it's possible, in information-processing terms, to do it. For the time being, "autoprogramming" is used for the ridiculously simple task of re-programming an RF tuning device after a power failure.
Meanwhile, the real thing I'm daydreaming here remains the stuff of science fiction.