Dmitry Kobak (kobak) wrote,

QFSS 2007

What follows is a lengthy story about the Quantum Foundations Summer School that was held at Perimeter Institute at the end of August and that I was lucky to participate in. I did a quick search and found out that (surprisingly) nobody has blogged about the event; that was the main reason to write all this, and to do it in English. I’m going to send a link to this post to all the students who attended QFSS, and would appreciate any comments or corrections. Any discussions and remarks are very welcome and encouraged!

All the talks were video recorded and can be found on the great pirsa.org website, where all the lectures ever given at PI are available.

All in all the school was very interesting, PI turned out to be a wonderful place, the Black Hole bistro served terrific meals, and I managed to visit Niagara Falls, which was very impressive. Many thanks to everybody who organized the school and made it completely free.


 

Jeremy Butterfield

It was very nice that the first talk was given by a philosopher and not a physicist. Butterfield spoke about the measurement problem, listed the widely proposed solutions and then talked a bit about the difference between proper and improper density matrices (in order to clarify why decoherence by itself doesn’t solve the problem). I learned nothing new, but it was a great pleasure to listen to Jeremy. I just love British pronunciation (especially that of a philosophy professor) and the careful philosophical style of argumentation and speech.
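The proper/improper distinction is easy to see numerically. A minimal numpy sketch (my own illustration, not from the talk): a 50/50 «proper» mixture of |0> and |1>, and the «improper» reduced state of one half of a Bell pair, give the exact same density matrix, so no measurement on the system alone can tell them apart — which is precisely why decoherence by itself doesn’t settle the measurement problem.

```python
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

# Proper mixture: classical ignorance about which pure state was prepared.
rho_proper = 0.5 * np.outer(ket0, ket0.conj()) + 0.5 * np.outer(ket1, ket1.conj())

# Improper mixture: reduce the Bell state |Phi+> = (|00> + |11>)/sqrt(2)
# by tracing out the partner qubit.
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
rho_joint = np.outer(phi_plus, phi_plus.conj())
# Partial trace over the second qubit: reshape to (s, e, s', e') and
# contract the environment indices.
rho_improper = rho_joint.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

# Identical matrices, despite the completely different physical provenance.
print(np.allclose(rho_proper, rho_improper))  # True
```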

But nothing to discuss here.

 

Robert Spekkens

Spekkens talked about the contextuality of quantum theory. Usually the notion of contextuality is defined in terms of some hidden variable theory (possibly underlying quantum mechanics). The famous Kochen-Specker-Bell theorem states that a noncontextual hidden variable theory is impossible (when the dimension of the Hilbert space is at least 3), i. e. quantum mechanics is contextual. There are several very nice and ingenious proofs; Spekkens demonstrated the argument by Cabello. The question he was concerned with is how to broaden the notion of contextuality. He proposed two refined notions, preparation and measurement contextuality, defined in purely operational terms (i. e. what is and what is not distinguishable by measurements). Then he proved that QM is preparation contextual.
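For readers who want to see a state-independent contextuality argument in this spirit, here is a quick numerical check of the Mermin–Peres «magic square» (my own illustration, not the proof Spekkens actually showed): nine two-qubit observables such that every row multiplies to +I while the columns multiply to +I, +I, −I. A noncontextual assignment of ±1 values would make the product of all nine values equal +1 when computed by rows and −1 when computed by columns — a contradiction.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Mermin-Peres magic square: each entry is a two-qubit observable with
# eigenvalues +-1; observables within each row and column commute.
square = [
    [np.kron(I2, Z), np.kron(Z, I2), np.kron(Z, Z)],
    [np.kron(X, I2), np.kron(I2, X), np.kron(X, X)],
    [np.kron(X, Z), np.kron(Z, X), np.kron(Y, Y)],
]

# Every row multiplies to +I ...
for row in square:
    assert np.allclose(row[0] @ row[1] @ row[2], np.eye(4))

# ... but the columns multiply to +I, +I and -I, so no noncontextual
# assignment of +-1 values to the nine observables can exist.
cols = list(zip(*square))
assert np.allclose(cols[0][0] @ cols[0][1] @ cols[0][2], np.eye(4))
assert np.allclose(cols[1][0] @ cols[1][1] @ cols[1][2], np.eye(4))
assert np.allclose(cols[2][0] @ cols[2][1] @ cols[2][2], -np.eye(4))
print("magic square verified")
```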

All that is interesting, but I would also like to write down two general remarks that Rob made. First: in what sense is contextuality mysterious (and he agrees that it really is)? In the same sense as Bell’s nonlocality. Maybe there is an underlying hidden variable theory and it is nonlocal. Fine. But why then is QM (in a sense) local? It is this interplay between the «nice» properties of QM and the «weird» properties of any possible underlying hidden variable theory that seems mysterious.

The second remark is about the meaning of the state vector |ψ>. This question is of course central to the whole issue of interpreting QM, and it was discussed many times during the QFSS. Robert’s opinion is rather unusual: he hopes to construct an epistemic hidden variable theory. «Epistemic» means that the state vector represents only our knowledge and does not exist in reality. Spekkens is one of the proponents of this approach (see his toy theory, for example), but at the same time he believes that there might be some hidden variables down there (and |ψ> would be our knowledge about them).

One last thing here: Robert is writing a textbook about quantum foundations, titled «Quantum puzzle», and told me that they might finish it in 2008. That would be great, since the lack of a general and unbiased book devoted to the whole subject of quantum foundations is, in my opinion, a great pity.

What I would like to understand better: the mentioned toy theory paper.

 

Christopher Fuchs

For me he was one of the most interesting and thought-provoking speakers, though sometimes it seemed that the more I talked to him the less I understood his views. He was also almost the only speaker several of whose papers I had read before the QFSS. Chris is well known for his ultimate epistemic views (he calls himself one of the three existing quantum Bayesians, the other two being his usual co-authors). The general claim is that a quantum state is nothing but the degree of belief of some observer. This view kind of dissolves the measurement problem: the «collapse» of a state vector is then nothing more than acquiring some new information and consequently updating one’s beliefs.

A natural question arises: the information about WHAT? I was very much surprised when Chris began his lecture by claiming that he actually believes in an independent reality (that there is a world «out there»), giving a nice quote from Martin Gardner about it. Moreover, one of his favourite arguments is the following: every particular result of a quantum measurement is a surprise => the world is made from something independent of us. (There is a famous story about Samuel Johnson kicking a rock and saying «I refute it thus»; Chris says that a quantum «something» not only hurts as a rock does, but actually hits back.)
«The hypothesis that there is an external world, not dependent on human minds, made of something, is so obviously useful and so strongly confirmed by experience down through the ages that we can say without exaggerating that it is better confirmed than any other empirical hypothesis.» (Martin Gardner)
This was a surprise for me, because the papers defending the epistemic Bayesian approach to QM usually avoid speaking about reality (and give the impression that the authors think there is either no reality at all or no sense in talking about it). Now I understand why they avoid this topic: because they have absolutely no idea what this reality could really look like. When pressed, Chris begins talking about quantum reality (he calls it «ZING») in a rather mystical way, reminding me of Leibniz with his monads. Anyway, he admits that there is no answer yet; but the goal of this Bayesian program is to split the textbook formulation of QM into two parts: an epistemic one and an ontic one. The whole state vector business is epistemic. What remains ontic? No idea yet (actually Chris mentioned that maybe the dimensionality of the Hilbert space is ontic).

Going back to Chris’s talk, the most impressive part for me was when he discussed quantum no-cloning and quantum teleportation (two things usually considered in some sense «strange») from the epistemic point of view. The point is that this quantum behaviour turns out to be analogous to the purely classical behaviour of beliefs (=> the apparent mysteriousness comes only from wrongly attributing some reality status to the state vectors). I was struck by the example of belief teleportation (unfortunately I can’t find any appropriate link here).

This all sounds very appealing, of course. But there are many statements here that I’m still very uncomfortable with. One thing that we discussed with Chris several times was the EPR-Bell experiment. There is no nonlocality here, claims Chris, because when Alice makes a measurement nothing happens at Bob’s place. What happens is that Alice can immediately update her beliefs about Bob’s system; that means that when she contacts it (or contacts Bob, who has contacted it beforehand) she is going to find out that the QM predictions were correct. But to contact means to go over there, and she can’t do that faster than light. Well, fine. My question is: wouldn’t Alice be surprised when she actually arrives at Bob’s and learns that all the correlations came out perfectly? I mean, how can that be? Chris replied: how can Alice be surprised when things come out exactly as she predicted? And he told me that this was exactly the point of the argument between Poincaré and de Finetti (not about QM but about probability theory in general). De Finetti’s article «Probabilismo» he called the most interesting piece ever written about probability.
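For concreteness, here is what those correlations look like numerically (my own sketch, using the standard singlet-state correlation E(a, b) = −cos(a − b) for spin measurements along angles a and b): the CHSH combination, which any local hidden variable model keeps at or below 2, reaches 2√2 for the optimal quantum settings. These are the correlations Alice finds confirmed when she finally travels to Bob.

```python
import numpy as np

# Singlet-state correlation between spin measurements along directions
# at angles a (Alice) and b (Bob).
def E(a, b):
    return -np.cos(a - b)

a1, a2 = 0.0, np.pi / 2            # Alice's two measurement settings
b1, b2 = np.pi / 4, 3 * np.pi / 4  # Bob's two measurement settings

# CHSH combination: |S| <= 2 in any local hidden-variable model.
S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(abs(S))  # 2.828..., i.e. 2*sqrt(2) > 2
```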

The other thing is that if the state vector is a degree of belief, then different observers can have different state vectors. That’s exactly what happens in the «Wigner’s friend» setup. We discussed it briefly over lunch, and Chris refused to be called a solipsist. However, I still don’t understand what the coherent picture is here; do Wigner and his friend agree on whether an event has happened or not?

Finally, there is the whole issue of the meaning of interference effects and of unitary evolution from the epistemic point of view. When somebody asked about unitary evolution during the lecture, Chris said that it was a very good point but then forgot to answer the question. I was going to ask him about it again, but didn’t have an opportunity.

What I would like to understand better: everything! I’d like to understand whether the epistemic views really lead to some meaningful picture of QM. I’m going to reread and rethink several articles Chris gave to me (for example this one, about «quantum certainty») and to find de Finetti’s «Probabilismo».

 

Wojciech Zurek

Listening (and talking) to Wojciech is great fun. He talked, of course, about decoherence, which he once participated in discovering. I don’t want to discuss decoherence itself, since it’s clear that decoherence effects do happen, that they make the quantum world look classical (macroscopic coherence gets destroyed very fast), and that on the other hand they do not solve the interpretational questions (that’s the story of proper and improper mixtures etc). Wojciech’s more recent ideas are so-called einselection (environment-induced superselection), quantum Darwinism and the existential interpretation.

Einselection is actually the result of decoherence: interaction with the environment provides a preferred basis, which is usually the pointer basis. Only these definite macroscopic states can survive decoherence; that’s an effective superselection rule. Quantum Darwinism is a slightly different concept. Wojciech notes that we usually observe something not by direct contact but by interacting with the environment. When I’m looking at a cat, I observe a small fraction of the photons scattered off the cat. Anybody can look at the same cat and get the same information about it, while observing different photons. That means that there are many (localized!) copies of the information about the cat spread out in the environment (in this case, the photon environment). This is exactly what is required for us to perceive a cat in a classical way. The technical question is which states can survive this quantum Darwinian selection by copying. The answer: the same states that survive decoherence. That’s good news.

Up to now I haven’t said anything about interpretation. Actually I don’t quite understand Zurek’s own position. He very carefully avoids speaking about it (there was a final slide in his presentation called «existential interpretation», but he ran out of time on the previous one and was very happy about that). I got the impression that his position is not «shut up and calculate», but «shut up for a while and calculate». He firmly believes that quantum mechanics itself should provide the answer to the interpretational questions, and attributes this approach to Everett. That’s why, for example, he spent much of the talk deriving Born’s rule (for an open quantum system) from unitary evolution.

However, he also stated very clearly that the other idea usually attributed to Everett — actual many worlds — seems to him much less attractive. We also privately discussed the proper/improper density matrix question; Wojciech said that if you can’t distinguish the two, then maybe it means that there’s actually no physical difference (as it once turned out with inertial and gravitational masses).

Anyway, I haven’t heard from him what exactly the «existential interpretation» is, and could not understand it from his articles either (here’s a recent review, very similar to the QFSS talk). I talked about it over several lunches with Robin Blume-Kohout (who may be a postdoc at PI, but I’m not sure). As I understood Robin, the main idea of the existential interpretation is that «existence» is an emergent phenomenon. Something «exists» if the information about it is spread in many copies throughout the environment. That means that an elementary particle usually doesn’t «exist»; nor does the Universe. On the other hand, anything that we directly observe surely «exists», just because it can be observed.

Well… I still fail to understand how all this helps to deal with the measurement problem. Probably it has to be understood from the viewpoint of Everett’s «relative state interpretation». The problem is that I don’t understand what the heck that is, as opposed to the many-worlds interpretation. See the Adrian Kent section below.

What I would like to understand better: if the existential interpretation clarifies anything.

By the way, Robin Blume-Kohout

We discussed one more thing with Robin: he claimed that QM is a final theory, in the sense that the Hilbert space structure is the fundamental layer of reality. That’s why he refused to admit that EPR-Bell correlations are mysterious at all: the world just works like that, like it or not. We talked for a while about whether this view contradicts the lessons one can learn from the history of physics (namely, that there has always turned out to be some underlying layer of understanding that explained the laws of the previous one). We both, I guess, remained unpersuaded.

 

Adrian Kent

Adrian spoke about MWI, but failed completely to persuade anybody that this approach is even meaningful. He did a good job of making people think that MWI is complete gibberish, so that they began wondering why he had spent two hours presenting rather elaborate arguments against it.

What really interests me about many worlds is something completely different. It seems (correct me if I’m wrong) that in the original Everett 1957 paper there were no many worlds at all. The original approach was called the «relative state interpretation», and it was only DeWitt who, in 1971, introduced (?) and popularised (that’s for sure) all that splitting-worlds stuff. According to the Stanford Encyclopedia, it’s absolutely unclear what Everett himself meant («The problem is that there is a gap in Everett’s exposition between what he sets out to explain and what he ultimately ends up saying… it is unclear exactly how Everett intends to explain an observer’s determinate measurement records…»).

Now, my question is what the connection is between Carlo Rovelli’s relational interpretation and Everett’s relative state interpretation (note the similarity of the names; also, I have seen direct claims that RQM is not very different from MWI). According to the quote above, the latter is not a meaningful concept. Is the former any more meaningful? I’m a little bit lost in all this.

By the way, at PI we met Matteo Smerlak — a graduate student from France who wrote with Rovelli an article about «Relational EPR», known for several passages about elephants («It is clear that everybody sees the same elephant. More precisely: everybody hears everybody else stating that they see the same elephant they see»). But we spent more time with Matteo playing foosball and ping-pong than discussing QM.

What I would like to do: to watch the video-recorded lectures given at PI during the «Everett at 50» conference, which just took place in late September. Starring Chris Fuchs, David Wallace, Max Tegmark, Simon Saunders and others. Must be interesting.

 

Sandu Popescu

From the beginning Sandu declared that he was not going to talk about any interpretational issues. He talked about QM itself (which he evidently finds more important): in the first lecture about so-called «modular variables», and in the second about pre- and post-selection and so-called «weak measurements». I confess I was absolutely unaware of both of these things before. Both subjects, by the way, are covered in a recent book by Aharonov and Rohrlich called «Quantum Paradoxes», which in Sandu’s opinion is «the most important book in QM after founding fathers». I spent some time looking through this book in PI’s library, and it really did look interesting.

Unfortunately both subjects are rather technical, in the sense that it’s hard to talk about them without formulas, so I won’t try. After the second lecture some people began arguing with Popescu about the so-called «three box» paradox of Aharonov: they claimed that it’s not a paradox at all, because the same features can be found in classical physics as well. As far as I know, no consensus was reached.

 

Ben Schumacher

This was the only talk devoted to quantum information, and I liked it very much, but I will probably skip it here, since it did not deal with the foundational problems. In the first part Ben talked about CP-maps and proved a (weakened?) version of Stinespring’s theorem (any CP-map can be realised as a unitary evolution of a bigger system). In the second part he talked about the information flow in different quantum gates.
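The content of that theorem is easy to illustrate numerically. A minimal sketch of my own (using an arbitrary bit-flip channel as the example CP-map): build the isometry V = Σ_k K_k ⊗ |k⟩ from the Kraus operators, let it embed the system into system-plus-environment, trace out the environment, and recover exactly the same channel.

```python
import numpy as np

# Example CP-map: the bit-flip channel rho -> (1-p) rho + p X rho X.
p = 0.3
X = np.array([[0, 1], [1, 0]], dtype=complex)
K = [np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * X]  # Kraus operators

# Dilation isometry V = sum_k K_k (x) |k> from system into
# system (x) environment.
env = np.eye(len(K), dtype=complex)
V = sum(np.kron(Kk, env[:, [k]]) for k, Kk in enumerate(K))
assert np.allclose(V.conj().T @ V, np.eye(2))  # V is indeed an isometry

rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)  # some input state

# The channel computed directly from the Kraus operators ...
out_kraus = sum(Kk @ rho @ Kk.conj().T for Kk in K)
# ... and via the dilation followed by a partial trace over the environment.
big = V @ rho @ V.conj().T
out_dilated = big.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

print(np.allclose(out_kraus, out_dilated))  # True
```

(Strictly speaking, an isometry into a larger space is the Stinespring form; padding V out to a full unitary on system ⊗ environment gives the «unitary evolution of a bigger system» formulation from the talk.)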

 

Anthony Leggett

Leggett got a Nobel prize in 2003, together with Ginzburg and Abrikosov, for their investigations of superfluids/superconductors (which they had done at very different times, Ginzburg’s work dating back to the 1950s!). Anyway, to cut a long story short, Leggett doesn’t believe that QM holds on all levels and thinks that it must break down according to some collapse scenario (like GRW). He talked mainly about the experiments that could be done in the near future to falsify either such «macrorealistic» theories or QM.

I don’t want to go into details here (especially because I haven’t understood them very well). Though I did my master’s thesis exploring possible relativistic generalisations of the GRW theory, I now very much doubt that something like this will turn out to be the right solution to the problems of QM.

 

Lee Smolin

Lee talked about the connections between quantum foundations and quantum gravity. In the first part of his talk he listed the problems arising when you try to apply QM to the whole Universe. (He started by telling a story about Alain Connes, who once said that every time somebody mentions the Universe, everybody must stand up.) The problems are many: what is a measurement? what is probability? does the separation of physics into dynamical laws plus a state space representing different initial conditions (here I quote) make sense for a theory of the whole Universe, which by definition occurs only once? etc. All this sounded a bit discouraging to me, because it is unclear whether these questions are even meaningful… they are evidently too big. The second part of the talk was devoted to the clues we can get from quantum gravity, but it was so disappointingly speculative that I will leave it out.

In the end Lee spoke about his personal views, which are as follows. Quantum foundations lie at the heart of unifying physics and quantizing gravity. On the other hand, quantum foundations should not be done separately.

On this solemn note I’m finally finishing.
(I’ve been writing this post for more than a month I guess. Can’t see it any more.)
Tags: science