Book News: Review

The enigmatic Elephantstrunk has some good things to say about No One Makes You Shop At Wal-Mart. I hope he or she does not mind if I reproduce it here.

Tom Slee’s “No one makes you shop at Wal-Mart” should be bundled with every copy of “The Wisdom of Crowds”. I like “The Wisdom of Crowds” but it always seemed dangerously incomplete. NOMYSAW is not exactly a counter argument but shows that life is a lot more complicated than the TWoC might suggest. Mr. Slee is clearly exercised by the free ride given to arguments resting on offers of “choice”. Starting with the Prisoner’s Dilemma the book shows ways that the seemingly obvious good of giving people the opportunity to decide what is best for themselves can sometimes make everyone worse off.

It’s not an original point, but nor is it in fact a controversial one. What is new is the articulation of what it means and of what some popular arguments don’t. The book uses the Prisoner’s Dilemma as a simple example of individuals each seeking to maximise their personal situation and suffering as a result, but the most potent are at some level examples of “The Tragedy of the Commons”, in places where the Coasian solution of applying property rights is a less comfortable proposition.

Finding a whole book, especially one so articulate and clear, about a persistent but only half-formed idea is quite a thrill. I liked it a lot. Whimsley, mentioned on the right, is the author’s occasional blog.

Thanks very much Mr/Ms Trunk.

Book News: Speaking Engagements

Maybe it’s time to say something about the book again. I’ve been invited to give a few talks recently – some have already happened, and some are coming soon. Thanks to those who have invited me – the ones so far have been very rewarding (for me at least).

Car Free Day was an event sponsored by WPIRG in September. They invited me to speak at the outdoor event in Victoria Park and also at the University. Speaking outside with a small audience is difficult – the surrounding noise makes it feel as if you are shouting at people sitting a few feet from you. If you weren’t there to hear me, I don’t think you missed anything. The University event was better, with about 20 people there and some good discussion afterwards. Most interesting was a comment about an intriguing high-tech public bicycle system in Lyon named Velo’v, which seems to be a big success according to a Guardian article reprinted here.

Kitchener NDP recently hosted Peggy Nash, a new MP for Parkdale-High Park in Toronto. I was very impressed. She spoke for 45 minutes with no notes on a wide variety of topics, from a recent fact-finding visit to Lebanon to the ins and outs of Parliamentary committees, and was obviously smart and very well informed. I was asked to present her with a copy of my book as a thank-you for the visit, which was a privilege.

After the meeting I met two philosophy profs from the University of Waterloo. Dave DeVidi is using No One Makes You… in a Decision Theory course, and Tim Kenyon may be using it next year in a Critical Thinking course.

In a week and a half I’ll be speaking as part of a panel at an event at York University named Social Justice: From Rhetoric to Action, put on by the Centre for Social Justice. The programme is still in draft form – I’ll post more when it gets closer. It’s a challenge to condense a piece of the book into an edible-sized chunk for talks, but I think I’m slowly getting better at it.

Finally, I’m an invited speaker next month in an Engineering and Society course at McMaster University, part of the Peace Studies program. The course, "War and Natural Resources: The Case of Oil", is run by Graeme MacQueen and Jack Santa Barbara. I’m sure it is a fascinating one – I just hope I can hold people’s interest.

One trouble with writing a book that covers quite a wide range of topics is that you don’t get to give the same talk twice — one event is about transit, the next about social justice, the next about warfare. But the best way to learn more about something is to tell others about it, so this has been very rewarding.

Quantum Computing: Gripes from a Quantum Fuddy-Duddy

Waterloo is an interesting place to live these days for an ex-quantum-mechanic, mainly because of all that techno-geek BlackBerry money that is being splashed around. When I bike to work I go past the Perimeter Institute for Theoretical Physics at the beginning of my ride, and then pass within a stone’s throw of the Institute for Quantum Computing at the end; both are making waves these days. All these brainy young things pushing the boundaries of what we know and don’t know – it’s fun to watch.

The main media event recently has been the publication of the book “The Trouble with Physics” by the Perimeter Institute’s Lee Smolin, which is a criticism of String Theory and its 30-year failure to prove itself as something more than a promising candidate for a theory of everything. Smolin’s book has been widely reviewed, often in conjunction with Peter Woit’s “Not Even Wrong”, which argues much the same thing. The title “Not Even Wrong” was a devastating putdown coined by Wolfgang Pauli about another physicist’s work – the implication being that it was so mistaken you couldn’t even show why and how it was incorrect. My own work was in the relatively mundane world of molecules rather than cosmological elementary particles; that is, it was quantum, but on our side of the Heisenberg Uncertainty Principle, not the other side. Both books argue that String Theory has become so removed from experiment that it has ceased to be science, and has become prone to building elaborate edifices on an insufficiently sound physical basis.

While PI has made the biggest splash locally, the Institute for Quantum Computing is a rising star too. And Quantum Computing is something I feel I can get more of a handle on than String Theory, so I’ve been reading a bit about it. And what I see either exposes me as a quantum fuddy-duddy or suggests that Quantum Computing (QC from here on) could do with listening to Smolin’s critiques and applying them to itself.

So here is what’s wrong with quantum computing from what I can see. (Disclaimer: these opinions are based on a number of popular articles, my attempts to follow Scott Aaronson’s excellent weblog Shtetl-Optimized, and the bit I have read of Jozef Gruska’s 1999 text “Quantum Computing”, which I actually got out of the library last week. These opinions are worth exactly what you paid for them.)

Reading a QC book is an odd thing for a regular physics type. Where are the Hamiltonians? Few and far between, it seems (“Hamiltonian” is mentioned in only a literal handful of places — five — in Gruska). Everything is about the action of unitary operators on states. For anyone reading this who is not a physicist, here’s what that means. In physics, when things interact, that interaction is described by a Hamiltonian. If you’re trying to solve a problem, your first step is to construct a Hamiltonian operator that describes the interaction you are studying, and then solve the equations that come from it. Unitary operators, on the other hand, describe a change in a quantum system from one state to another, which may sound like the same thing but isn’t. In regular computers, circuits consist of many gates that carry out elementary operations (AND, OR, and so on). If you had a quantum “gate”, a unitary operator would describe the change from input to output (that’s the change in state), and ignore the way in which the system gets from one end of the gate to the other. The Hamiltonian would describe the actual physical process that happens as the light or electron or nuclear spin goes from the input to the output.
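To make that distinction concrete, here is a tiny numerical sketch of my own (not from Gruska or anyone else, and the particular Hamiltonian is just one illustrative choice): the Pauli-X matrix treated as a quantum NOT gate unitary, alongside one Hamiltonian that would generate that gate if you let the system evolve for the right length of time.

    import numpy as np
    from scipy.linalg import expm

    # The quantum NOT gate described as a unitary operator: it just says
    # |0> -> |1> and |1> -> |0>, with no word on how the change happens.
    X = np.array([[0, 1],
                  [1, 0]], dtype=complex)

    # One possible Hamiltonian that would implement the gate (purely
    # illustrative): H = (pi/2)(X - I), evolved for t = 1 with hbar = 1.
    I = np.eye(2)
    H = (np.pi / 2) * (X - I)
    U = expm(-1j * H)          # U = exp(-iHt) describes the physical evolution

    print(np.allclose(U, X))   # True: this evolution reproduces the gate,
                               # but so would many other Hamiltonians

The unitary X is all the QC theorist needs; the Hamiltonian H, and everything it stands for physically, is what somebody eventually has to engineer.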

That still might not be very clear, so here’s the end result: the theory of QC is built in a manner that has become deliberately divorced from the real world. The “implementation”, or actual building of a computer, is consciously separated from the theory of the logic and algorithms that you would build on top of that computer. There is a division of labour between theory and experiment that is present right down to the way the theory is constructed, and that is a bit of a problem, because it means that every single thing the theorists say is conditional. All the theorems they prove need a big asterisk next to them saying “as long as someone can make a computer”. As a footnote, Richard Feynman, who knew what he was doing, did some early explorations in quantum computing and approached it by building Hamiltonians and putting together Schrödinger equations – just what you’d expect a physicist to do.

You can see why the discipline has gone in this manner. Regular old classical computing, after all, followed a similar path. It started off with mathematicians (von Neumann and Turing) who set out a logical architecture of computers and algorithms, and this work was followed (loosely speaking, and as best I understand) by the engineers once the transistor was invented. And it makes sense that those interested in exploring algorithms can do so without needing to know all about welding – this stuff is complicated enough as it is. But we should keep in mind that there is always the possibility that the success of regular computing will not be repeated. The proof of the pudding and all that.

The second thing that bothers me about QC is related to this division of labour. Everywhere you look they are talking about “entanglement” and all the things that go along with it: Bell’s inequalities, EPR experiments and so on. In Gruska’s book, Everett (of the “many-worlds interpretation”) is mentioned as many times as Hamiltonian. Again, for a quantum fuddy-duddy this raises red flags. The weirdness of the quantum world is seductive – enough that my 1st-year lecturer (the late Peter Dawber) felt he had to warn us: “this is interesting, but it’s interpretation. My advice is learn how to calculate and solve problems, and don’t get stuck in the philosophical quicksand”. Good advice that has been repeated by many a lecturer, I’m sure. And yet here are these QC-ers diving headlong into entangled states, and spending more time on them than on things with actual Hamiltonians. Looking in Gruska’s index again, entanglement merits 42 mentions. Perhaps this is mainly a rhetorical point, but I do think it is worth making, because entanglement is built into the culture of quantum computing.

Entanglement is connected to what happens when you prepare a multi-particle quantum state and then let the particles become separated. So you get paragraphs like this:

Prepare a system with two particles, each of which can have two values of spin, and send them off in opposite directions. Then measure one of the particles to find its spin. This measurement then immediately fixes the spin of the other particle, even though it’s a long way away. The state of the two particles is entangled.

But you could also write this paragraph this way.

Prepare a system with two particles, each of which can have two values of spin, and send them off in opposite directions. Then measure the state of the system by checking one particle. This measurement tells you the spin of both particles.

The difference is that the second phrasing avoids mention of entanglement and focuses on the fact that this is a single quantum system we are talking about: however far apart the two particles end up being, they still comprise a single quantum system, and that means no messing from the classical world. The two paragraphs refer to the same operations and mathematics, but the second avoids extraneous weirdness.
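For what it’s worth, here is the second phrasing written out as a short calculation (a sketch of my own, nothing more): prepare the two-spin singlet state, measure the system by checking particle A, and read off the state of the whole system afterwards.

    import numpy as np

    # Two spins, basis ordered |00>, |01>, |10>, |11> (first label = particle A).
    ket01 = np.array([0, 1, 0, 0], dtype=complex)
    ket10 = np.array([0, 0, 1, 0], dtype=complex)

    # The singlet: one state vector describing the single two-particle system.
    psi = (ket01 - ket10) / np.sqrt(2)

    # "Measure the system by checking one particle": project onto A being spin-up.
    P_A_up = np.kron(np.diag([1.0, 0.0]), np.eye(2))

    prob_up = np.vdot(psi, P_A_up @ psi).real   # probability of finding A up
    post = P_A_up @ psi
    post = post / np.linalg.norm(post)          # state of the system afterwards

    print(prob_up)   # 0.5
    print(post)      # [0, 1, 0, 0] = |01>: one measurement on the single system
                     # tells you the spin of both particles

Nothing in the arithmetic cares how far apart the two spins are: it is one state vector from start to finish.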

The two most mature methods of actually preparing quantum computers are NMR spectroscopy and ion traps. Ion traps deal with fine control of isolated quantum systems (as required for “entanglement”) and involve very exotic and incredibly precise experimental apparatus. This is what you would expect if you are dealing with single systems: the expense of dealing with them grows as the size of the separation grows. NMR deals with many systems (coffee cups, for example – the link is to a PDF file) and uses the fancy techniques of pulsed magnetic resonance to give these systems a variety of kicks. It has recently been extended to 12 “qubits” (individual spins), which is the biggest quantum computer to date.

But here is the odd thing: this most successful technique is not based on the single systems that are needed for entanglement. In fact, these same multi-pulse NMR experiments have been carried out for some years, and the word “entanglement” never raised its head, so far as I know, until the Quantum Computation people got interested. In NMR you are dealing with qubits that are not spatially separated (they are nuclei on the same molecule) and you are not dealing with a single quantum system described by a single state vector (you are dealing with a thermodynamic ensemble of quantum systems described by a density matrix). Myself, I could never follow the theory of multi-pulse NMR, but I’m pretty sure there was no mention of many-worlds interpretations in it.
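To show what I mean by that distinction, here is another sketch of my own (the polarisation number is made up for illustration, not a real NMR parameter): a single two-spin system in a pure state is described by a state vector, while a room-temperature NMR sample is an ensemble whose density matrix sits extremely close to the maximally mixed state.

    import numpy as np

    # A pure two-spin state and its density matrix rho = |psi><psi|.
    ket01 = np.array([0, 1, 0, 0], dtype=complex)
    ket10 = np.array([0, 0, 1, 0], dtype=complex)
    psi = (ket01 - ket10) / np.sqrt(2)
    rho_pure = np.outer(psi, psi.conj())

    # A high-temperature thermal ensemble of two spins: nearly maximally mixed,
    # with a tiny polarisation epsilon (the value here is purely illustrative).
    epsilon = 1e-5
    sz = np.diag([1.0, -1.0])
    I2 = np.eye(2)
    rho_thermal = (np.kron(I2, I2)
                   + epsilon * (np.kron(sz, I2) + np.kron(I2, sz))) / 4

    # Purity Tr(rho^2): 1 for a single pure system, about 1/4 for the ensemble.
    print(np.trace(rho_pure @ rho_pure).real)        # 1.0
    print(np.trace(rho_thermal @ rho_thermal).real)  # ~0.25

The NMR “quantum computer” lives in that second object, which is a long way from a delicately prepared single system.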

So QC should realise that the consequences of “entanglement” are limited to inherently exotic systems with which you are unlikely to be able to build a real computer. It’s useful in PR material to highlight the weirdness of the quantum world, but when talking science you should follow Occam, who said “avoid talking weirdness whenever possible”. For example, when QC people talk about “maximally entangled states” or “Bell states” they simply mean a state in which you’ve measured spin along one axis when you’re going to measure spin along another axis later. This can be talked about without reference to Bell or entanglement.

This thing with entanglement shows up in all kinds of popular articles by the practitioners of QC. As one example, here is a popular piece by some of the theorists (Steane and van Dam) who have demonstrated that you can use entanglement to enhance communication – that you can exploit the entanglement of a quantum state, together with regular messages among observers at different places, to get more efficient communications. The article reads in a fun enough way (it’s about participants at a game show who each carry their own little qubit into a separate cubicle and then pass messages). But it’s not possible. You can’t carry a qubit around, because in order to exploit entanglement you have to have a single quantum system. This kind of writing comes from thinking about measurement according to the first way I wrote the paragraph above (measure the particle) as opposed to the second (measure the system).

Perhaps I’m being overly picky about this because it is, after all, just a popular article (although it works in some concepts you would need undergraduate physics to understand before the end). I can imagine QCers saying, “well, it’s possible in principle”, but like a lot of “in principle” arguments I don’t buy it. I think Daniel Dennett dealt with this kind of argument in his brilliant “Consciousness Explained” when discussing the idea of a “brain in a vat” (aka the Matrix), where philosophers argue that you could “in principle” recreate the sensations of the world by stimulating the right portions of the brain. Dennett basically calls their bluff and accuses them of not thinking through the magnitude of the problem, and takes some time to spell out just how hugely implausible it is. Now it’s not a proof, but I think the same kind of thing applies to these popularizations of entanglement. You can “in principle” have widely separated parts of a single quantum state outside a hugely expensive laboratory only if you don’t think too hard about what the endeavour entails. In a sense, we’re back to the use of unitary operators by the theorists so they don’t have to think about implementations.

Well, this has been more rambling than I expected, so here’s a summary of what I see as the main points.

  • The split between abstract theory and physical implementation in the structure of quantum computing is a dangerous game. It means that everything that quantum computing theory says needs to be taken with a big pinch of salt until realistic quantum computers are demonstrated. The widespread use of the rhetoric of entanglement and other ideas that focus on the non-intuitive parts of quantum mechanics exacerbates the problem by pulling QC theory further away from actual implementations.
  • The fact that the biggest quantum computers to date are NMR based demonstrates how little entanglement adds to the actual theory of QC. And the fact that the best alternative is the inherently exotic approach of ion traps is disheartening.

I hope I’m wrong. There are a lot of smart people working on quantum computing who, I’m sure, have thought through these issues more than I have, and in some ways they look like they are making progress (see here). But here are two predictions that will show whether I’m right or wrong in a few years. One is that what constitutes a major advance will be redefined. The participants in a field are always enthusiastic about the major advances that are happening, but if we see major experimental advances phrased in terms like “enhance the understanding of what is necessary for quantum computation” rather than “actually compute something”, then watch out. Second, the goals (PDF) set out by some people in the field will not be achieved.

Well, that’s my Canadian Thanksgiving ramble. Now I’m going to plant some tulips, which, with a bit of luck, will appear simultaneously, as if by magic, in a coherent fashion next spring.

Update: A recent post at Shtetl-Optimized discusses a paper that makes some of the same criticisms as my post here, except done properly: “Is Fault-Tolerant Quantum Computation Really Possible?” by M. I. Dyakonov. Fuddy-duddies unite!

No Attack On Iran

It is important that Seymour Hersh exposes the rumblings from various parts of the US government about a potential attack on Iran, but on this occasion I’ve felt for some time that it’s not going to happen. It’s not that Bush, Cheney, Rumsfeld and so on wouldn’t do such a thing – personally I think they’d do it in a heartbeat if it gained them a few points in the polls – but that they can’t even if they want to. It’s getting close to the end of Bush’s second term, he doesn’t have the personal clout any more, and Iraq is such a complete and utter catastrophe that the response to any further military adventurism would, I think, be swift and damning. Now this wouldn’t reassure me if I was sitting in Tehran, but that’s how it looks from here.

And now someone with some actual knowledge says the same thing. The Yorkshire Ranter is someone who seems to know his military logistics, and also comes from God’s Own County, so he can hardly be wrong. He argues that the US just doesn’t have the needed stuff in the area to carry out any attack on Iran.

His recent posts have been excellent – I especially like his unified theory of stupidity on terrorism, where he starts off with this:

I’m beginning to think that it’s possible to discern so many similarities between really stupid opinions on terrorism that we can call it a theory. Specifically, if you’re talking about state sponsorship, you’re probably wrong, unless overwhelming evidence contradicts this. As far as I can tell, the modern version of this theory originated in the late 1970s or early 1980s. It had been about – Shakespeare has a character in Richard II allege that "all the troubles in our lands/have in false Bolingbroke their first head and spring" – but the strong form seems to have originated then.

Key features are that 1) terrorist or guerrilla activity is never the work of the people who appear to carry it out, 2) instead it is the work of a Sponsor, 3) that only action against the Sponsor will be effective, 4) even if there is no obvious sign of the Sponsor’s hand, this only demonstrates their malign skill, and 5) there is evidence, but it is too secret to produce. In the strong form, it is argued that all nonconventional military activity is the work of the same Sponsor.

and his "Recidivist with alert populations" post, where he says this:

try out the following quote from one Robert Mocny, director of the USVISIT program at DHS:

"We cannot allow to impediment our progress the privacy rights of known criminals."

The law is what I say it is, and you’re either with us, or you’re with the terrorists. Perhaps literally with them, in the cells. Joseph Sensibaugh, manager of biometric interoperability for the FBI, meanwhile opines that "It helps the Department of Homeland Security determine who’s a good guy and who’s a bad guy," targeting "suspected terrorists" and "remaining recidivist with alert populations". Not to mention the president of Bolivia and a dead bluesman, apparently.

Why does it specifically have to be illiterate authoritarianism, by the way? What does that last phrase actually mean, anyone? Anyway. Enquiring minds want to know more. What was this "pilot project"? Whose records were given to the DHS? Will they be told? What are the safeguards? Where are the guarantees?

Good questions, Alex.

Scholarships: Enough With the Leadership Thing

For family reasons I’ve been looking at university scholarships. There are those that you get if you have a certain average, and then there are others that you have to apply for and which usually involve a mixture of scholarship and "other stuff". And that "other stuff" is almost always defined as "leadership". Like the Lo Family scholarship at the University of Toronto (http://www.adm.utoronto.ca/awd/scholarships.htm#UTscholars):

"Awarded to students who are active as leaders, are respected and considered to be well-rounded citizens in their
school and community…"

Or at Queen’s University, the D & R Sobey Atlantic Scholarship requires "Academic excellence, proven leadership and involvement in school or community activities." (http://www.queensu.ca/registrar/awards/apply/apply-scholar.html). You get the idea. Of course there are exceptions (like the lovely John Macara (Barrister) award at U of T: "Preference given to applicants who can establish that they are the blood kin of the late Mrs. Jean Glasgow, the donor of this award.") but most of the time it’s all about the leadership.

Now I have nothing against leaders — all successful groups need someone to take credit for their accomplishments — but this focus on leadership to the exclusion of all else is a crock. Apart from being ill-defined, it does a pretty good job of saying to those 17 and 18 year olds out there that there’s only one kind of admirable person in the world, and that’s those who join lots of things, play sports (preferably as captain or quarterback), and Get Involved. Do we really want a world full of the parentally-pushed, self-important, power-hungry egotists who fill "leadership" positions as teenagers?

So, free of charge and in search of a better future for all of us, here is a list of scholarships I’d like to see adopted by universities:

The Wordsworth Scholarship: awarded to students who have shown that they deeply appreciate the world around them and pursue independent expression of their thoughts, regardless of peer pressure.
Documentation required:
– Tear-stained copy of a letter of rejection by a former girlfriend/boyfriend
– A notebook filled with juvenile poems
– Letter banning you from school spirit club
The entrance exam will require you to sit still, in complete silence, for 30 minutes.

The Paddy Clarke Initiative Award: awarded to students with demonstrated ability to take responsibility for their own lives.
Qualifications:
– Must have lived in a two-parent home for less than half their childhood.
– Must have moved out of home at least once during their teenage years. Preference given to those who have lived in a squat.
– Preference will be given to those convicted of shop-lifting, as long as the theft was for a demonstrably useful object.

The Larkin Scholarship: awarded to middle-class students with regular parents, who have never travelled abroad. Qualifications:
– Must own and wear bicycle clips regularly
– Attendance at church or other religious institution preferred. Belief not necessary.

The Perec Prize: awarded to students who have demonstrated precocity in the realm of obscure puzzles and word games.
Qualifications:
– An essay is required which must include all of the following:
    – a list of at least 24 related items
    – a meal that is all one colour
    – a mathematical theorem
The essay must have no hypothesis or conclusion, but must include at least three words that contain all the vowels in reverse order.

The Bookworm Scholarship: awarded to students who have demonstrated intellectual curiosity by exploring the world they live in through the medium of books.
Documentation required:
– Dog-eared copies of at least three major works of literature.
– Must have read at least two books banned by major school boards.

The ability to articulate your ideas clearly demonstrates that you really don’t understand the complexity of the world and will exclude you from this scholarship.

YouTube is a phenomenon

Regular readers may remember that I posted on YouTube a little video of some caterpillars in my front yard a few months ago. I went back there a few days ago and took a look at how many times it had been viewed.

OVER 160,000.

This is obviously a much bigger audience than I will get for anything else I do in my entire life. I’m sure there is a moral in here somewhere, but I have no idea what it is.

Software Featuritis, or Why Checklists are Bad

Here is a common-sense approach to software development, which I’ll call the checklist approach:

  1. Propose a new feature.
  2. Ask if the feature is useful.
  3. If the feature is useful, implement it.

This essay shows why the checklist approach fails: why adding useful new features to a software product can make the overall product less useful to end users. This is a phenomenon I like to call featuritis.

At its simplest, a software product is a set of n independent features (i = 1, …, n). Each feature has a utility to the customer of u_i. The overall utility of the software product to the customer is therefore

U = Sum( u_i )

This tells you to keep adding features to the software to increase its utility, end of story. Not very interesting.

But software is not quite so simple as this. Each feature has not only a utility, but also a cost — c_i. This is the cost to the end user—not the developer—of using or deciding not to use the feature. The cost may include the time taken to find out about a feature, the effort of deciding how to use it and whether it is the right feature to use (among the options available), the difficulty of exploring and evaluating it, and so on.

With this cost added in, the overall utility of the software product to the end user is

U = Sum( u_i – c_i )

Let’s keep things really simple, and assume that all features have the same utility (u). We can’t do this for the cost though: the cost of using a feature inevitably depends on the total number of features in the product. This may be because it takes longer to find information about a particular feature in the documentation or user interface, or because it is more difficult to decide if this is the right feature to use among those that are present, or a host of other reasons (more on this below). So the cost of finding out about a feature is (c * n).

The overall utility of the software is then

U = Sum( u – c * n )

or, as all the terms are equal,

U = n ( u – c * n )

Which looks like this:

[Figure: plot of U = n( u – c * n ) against the number of features n — an inverted parabola that rises to a maximum and then falls back to zero.]

The utility function increases up to a maximum at n = ( u / 2 c ), and then decreases: that is, beyond a certain point, adding new features actually makes the software less useful. In other words, software is vulnerable to featuritis: it can become so complex that its complexity makes it useless. It may contain useful features, but finding these needles in the haystack of the product is frustratingly difficult. Experience tells us that this is a reasonable, if not surprising, result.

What is more, adding features that are useful in and of themselves can still introduce featuritis. For a feature to be useful, its utility (u) must be greater than the cost of finding out about it (c * n). So using this model, features will be added to the software until the following condition is met

n = u / c.

which is where the line crosses the x axis. The maximum utility of the software occurs at half this value (n = u / 2 c): if we continue to add features until the last feature is only just useful, the overall utility of the software will be U = 0. If features were stopped at (u / 2 c), the utility would be U = ( u^2 / 4 c ). Even a useful feature degrades the usability of other product features, by making them harder to use (increasing the cost of using them).
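If you want to play with the model, here is a short sketch with made-up numbers (the values of u and c are arbitrary illustrations, not measurements of any real product):

    import numpy as np

    u, c = 10.0, 0.5          # illustrative values only

    def utility(n):
        # Overall utility of a product with n features: U = n(u - c*n)
        return n * (u - c * n)

    n = np.arange(0, 25)
    U = utility(n)

    n_best = u / (2 * c)                 # maximum at n = u/2c = 10 features
    n_zero = u / c                       # utility back to zero at n = u/c = 20
    print(int(n[np.argmax(U)]), n_best)  # 10 10.0
    print(utility(n_best))               # peak utility u^2/4c = 50.0
    print(utility(n_zero))               # 0.0: adding features until the last one
                                         # is only just useful wipes out the gains

In this toy example every feature up to the twentieth still passes the checklist test of being individually useful, yet every feature past the tenth drags the overall product downhill.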

It follows that checklist driven software development will lead to poor software. Checklists are simply lists of useful features, without any consideration of the costs they introduce to the customer. A longer checklist is often assumed to be intrinsically better than a short checklist, but we have just seen that this may not be so.

What is it about this very simple model that produces these results? The key assumption is that features have independent utilities, but that the costs of using features are not independent. In economists’ jargon, features exert a negative externality on each other. Is there a reason to think this might be true?

The independence of feature utility is simply a matter of defining a feature properly. For example, if a very common scenario requires two "features" (A and B) then the utility of feature A depends very much on whether feature B is present or not. By itself, the utility of feature A may be very low — you can’t do much with it by itself. Once feature B is present, though, feature A becomes very useful. They are not independent features. The problem here is that "features" A and B are incomplete, and so are really part of a single properly-defined "feature". The model requires that we define features in terms of tasks that customers can carry out. The model does not tell us to implement 5/6 of a feature and then not implement the last sixth because of complexity worries.

The second assumption is that the cost of using a feature is dependent on other product features. The idea of a cost of using a feature is a very general one, and is not only "how long does it take to locate this feature in the documentation?". It may also reflect the confusion and uncertainty introduced by a plethora of choices. For example, if there are multiple ways of carrying out a task, the customer must decide which way is the best. If there are other features that appear to be related, the customer must investigate those (and discard them) before deciding on a course of action. If there are prerequisite features that must be understood, the customer must learn these also. They must spend time evaluating the alternatives. It seems reasonable to assume that the cost of wisely using an appropriate feature does depend on the overall complexity (number of features) in the product. The job of user interface designers and documentation teams is, at least in part, to minimize this destructive interference between product features. Good UI and documentation can help postpone the point at which featuritis sets in, but can’t hold it back for ever.

Of course, while simplified models like this might help us watch out for certain kinds of trap, they can’t help us decide which specific features to include, and which to discard. Also, there is not much to be gained from trying to pinpoint the particular u and c associated with features or products. Despite the equations, it is a qualitative model and cannot be easily quantified in a useful manner. Finally, while it is certainly true that ui and ci are far from constant in any product, there is probably not a lot to be gained by trying to refine their representation. As soon as models like this are made more complex, the result is a very open-ended and conditional prediction. Building fine software products can’t be reduced to equations.

But I do think the central ideas are broadly correct: complexity does influence cost and utility in different ways, software does tend to become overly complex, and—most importantly—asking "is this a useful feature?" is not the right way to develop good products.