Bilderberg.org the view from the top of the pyramid of power
TonyGosling Site Admin
Joined: 26 Jul 2006 Posts: 1416 Location: St. Pauls, Bristol, UK
Posted: Wed Oct 09, 2019 12:57 am Post subject: Is Google's Quantum Computing really about killing privacy?
Google Achieves ‘Quantum Supremacy’ That Will Soon Break All Encryptions
BY CHRISS STREET
September 25, 2019
News Analysis
https://www.theepochtimes.com/google-achieves-quantum-supremacy-that-will-soon-break-all-encryptions_3096063.html
Bitcoin plunged by 15 percent as the tech world began to realize that Google's achievement of “quantum supremacy” in computing threatens all financial and military cyber-security.
TradingView data reported on Sept. 24 that the Bitcoin crypto-currency on the Binance exchange crashed from $9,352.89 to about $7,800 in less than an hour of trading, before closing at about $8,568. The loss of confidence followed the release of technical reports claiming that “quantum supremacy” computing will soon be able to break 256-bit encryption that is invulnerable to traditional supercomputer brute-force attacks.
Google’s efforts over the last decade to develop a “Super Intellect System” have focused on advancing artificial intelligence (AI), quantum computing and humanoid robotics. Many observers have referred to the effort as “Skynet,” after the 1984 movie “The Terminator.”
NASA published a scientific paper stating that Google achieved “quantum supremacy” with a 53-qubit quantum computer. The device takes just 200 seconds to complete a computing task that would require about 10,000 years on the most powerful current supercomputer.
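For scale, the speedup implied by those two figures is easy to work out (a back-of-the-envelope check, assuming a 365.25-day year):

```python
# Rough scale of the claimed advantage: 200 seconds on the quantum chip
# versus 10,000 years on a classical supercomputer.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~3.16e7 seconds

classical_seconds = 10_000 * SECONDS_PER_YEAR
quantum_seconds = 200

speedup = classical_seconds / quantum_seconds
print(f"Implied speedup: ~{speedup:.2e}x")  # on the order of 1.6 billion
```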
Although a 53-qubit quantum computer could break 53-bit cryptography in a few seconds, Bitcoin transactions are protected by 256-bit encryption. But Fortune reported that the qubit counts of Google's quantum computers will double at least every two years, passing 100 qubits by 2020 and reaching over 400 qubits by 2022, enough to defeat all crypto-currencies.
The Defense Information Systems Agency (DISA) disclosed in March that quantum computers will eventually expand their qubits into the 512, 1024 and 2048 range, rendering the highest levels of current U.S. military encryptions obsolete. DISA is relying on the Department of Defense’s ‘Other Transaction Authority’ to expedite issuing requests for whitepapers on potential cybersecurity encryption models that cannot be hacked by quantum computers.
Quantum computing leapfrogs the limitations of traditional computing, where calculations are done one at a time with the two binary digits (bits) of “0” or “1” that switch electronic current flowing through transistors on and off. But subatomic particles in quantum superposition can exist in two states at once, expanding binary digits into “entangled” qubits.
IBM CIO Fletcher Previn, speaking at the Baltimore AFCEA TechNet Cyber conference in March, described the quantum computing challenge: “It’s a completely different approach to computing than counting, which is the basis of all current computing. It’s possible we’ve been programming computers the wrong way for the last X number of years. Quantum is a much closer approximation to how nature figures things out.”
Chinese leader Xi Jinping demanded the complete modernization of the People’s Liberation Army (PLA) by 2035 and China’s transition to a major military power by 2050. To support the effort, China’s 2019 military spending grew by 7.5 percent. But its PLA-dominated technological research budget grew by 13.4 percent, with special focus on the integration of AI, quantum computing and humanoid robots.
It was assumed in early 2018 that China would be the first to achieve quantum supremacy after Chinese physicists claimed to set a quantum computing record by achieving 18-qubit entanglement, while still being able to control each qubit. But China has not made any comment regarding quantum computing since Google disclosed getting there first.
Demonstrating the accelerating speed of disruptive technical change, D-Wave—which provides exotic hardware to Google and other research organizations—announced on Sept. 24 that the company is offering a 2048-qubit quantum computer called the “D-Wave 2000Q” platform.
Chriss Street is an expert in macroeconomics, technology, and national security. He has served as CEO of several companies and is an active writer with more than 1,500 publications. He also regularly provides strategy lectures to graduate students at top Southern California universities. _________________ http://www.radio4all.net/index.php/contributor/2149
Secret Rulers https://www.youtube.com/watch?v=h0p-e2ng0SI
http://www.thisweek.org.uk
http://www.dialectradio.co.uk
http://www.911forum.org.uk
TonyGosling Site Admin
Joined: 26 Jul 2006 Posts: 1416 Location: St. Pauls, Bristol, UK
Posted: Thu Jan 30, 2020 11:58 pm
Quantum computing as a field is obvious bullshit
Posted in non-standard computer architectures, physics by Scott Locklin on January 15, 2019
https://scottlocklin.wordpress.com/2019/01/15/quantum-computing-as-a-field-is-obvious-bullshit/
I remember spotting the quantum computing trend when I was a larval physics nerdling. I figured maybe I could get in on the chuckwagon if my dissertation project didn’t work out in a big way (it didn’t). I managed to get myself invited to a Gordon conference, and have giant leather bound notebooks filled with theoretical scribblings containing material for 2-3 papers in them. I wasn’t real confident in my results, and I couldn’t figure out a way to turn them into something practical involving matter, so I happily matriculated to better things in the world of business.
When I say Quantum Computing is a bullshit field, I don’t mean everything in the field is bullshit, though to first order, this appears to be approximately true. I don’t have a mathematical proof that Quantum Computing isn’t at least theoretically possible. I also do not have a mathematical proof that we can make the artificial bacteria of K. Eric Drexler’s nanotech fantasies. Yet, I know both fields are bullshit. Both fields involve forming new kinds of matter that we haven’t the slightest idea how to construct. Neither field has a sane ‘first step’ to make their large claims true.
Drexler and the “nanotechnologists” who followed him assume that because we know about the Schroedinger equation, we can make artificial forms of life out of arbitrary forms of matter. This is nonsense; nobody understands enough about matter in detail or life in particular to do this. There are also reasonable thermodynamic, chemical and physical arguments against this sort of thing. I have opined on this at length, and at this point, I am so obviously correct on the nanotech front, there is nobody left to argue with me. A generation of people who probably would have made first-rate chemists or materials scientists wasted their early, creative careers following this over-hyped and completely worthless woo. Billions of dollars squandered down a rat hole of rubbish and wishful thinking. Legal wankers wrote legal reviews of regulatory regimes to protect us from this nonexistent technology. We even had congressional hearings on this nonsense topic back in 2003 and again in 2005 (and probably some other times I forgot about). Russians built a nanotech park to cash in on the nanopocalyptic trillion-dollar nanotech economy which was supposed to happen by now.
Similarly, “quantum computing” enthusiasts expect you to overlook the fact that they haven’t a clue as to how to build and manipulate quantum coherent forms of matter necessary to achieve quantum computation. A quantum computer capable of truly factoring the number 21 is missing in action. In fact, the factoring of the number 15 into 3 and 5 is a bit of a parlour trick, as they design the experiment while knowing the answer, thus leaving out the gates required if we didn’t know how to factor 15. The actual number of gates needed to factor a n-bit number is 72 * n^3; so for 15, it’s 4 bits, 4608 gates; not happening any time soon.
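Taking the post's 72·n^3 gate-count estimate at face value (it is the author's figure, not a settled constant), the growth is easy to tabulate:

```python
# Estimated quantum gate count to factor an n-bit number,
# using the post's 72 * n^3 figure.
def shor_gate_estimate(n_bits: int) -> int:
    return 72 * n_bits ** 3

for n in (4, 256, 2048):
    print(n, shor_gate_estimate(n))
# 4 bits (factoring 15) needs 4,608 gates; 2048 bits needs ~6.2e11
```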
It’s been almost 25 years since Peter Shor had his big idea, and we are no closer to factoring large numbers than we were … 15 years ago when we were also able to kinda sorta vaguely factor the number 15 using NMR ‘quantum computers.’
I had this conversation talking with a pal at … a nice restaurant near one of America’s great centers of learning. Our waiter was amazed and shared with us the fact that he had done a Ph.D. thesis on the subject of quantum computing. My pal was convinced by this that my skepticism is justified; in fact he accused me of arranging this. I didn’t, but am motivated to write to prevent future Ivy League Ph.D. level talent having to make a living by bringing a couple of finance nerds their steaks.
In 2010, I laid out an argument against quantum computing as a field based on the fact that no observable progress has taken place. That argument still stands. No observable progress has taken place. However, 8 years is a very long time. Ph.D. dissertations have been completed, and many of these people have gone on to careers … some of which involve bringing people like me delicious steaks. Hundreds of quantum computing charlatans achieved tenure in that period of time. According to Google Scholar, a half million papers have been written on the subject since then.
There are now three major .com firms funding quantum computing efforts: IBM, Google and Microsoft. There is at least one YC/Andreessen-backed startup I know of. Of course there is also D-Wave, which has somehow managed to exist since 1999; almost 20 years, without actually delivering something usefully quantum or computing. How many millions have been flushed down the toilet by these turds? How many millions which could have been used building, say, ordinary analog or stochastic computers which do useful things? None of these have delivered a useful quantum computer with even one usefully error-corrected qubit. I suppose I shed not too many tears for the money spent on these efforts; in my ideal world, several companies on that list would be broken up or forced to fund Bell Labs-style moonshot efforts anyway, and most venture capitalists are frauds who deserve to be parted with their money. I do feel sad for the number of young people taken in by this quackery. You’re better off reading ancient Greek than studying a ‘technical’ subject that eventually involves bringing a public school kid like me a steak. Hell, you are better off training to become an exorcist or a feng shui practitioner than getting a Ph.D. in ‘quantum computing.’
I am an empiricist and a phenomenologist. I consider the lack of one error corrected qubit in the history of the human race to be adequate evidence that this is not a serious enough field to justify using the word ‘field.’ Most of it is frankly, a scam. Plenty of time to collect tenure and accolades before people realize this isn’t normative science or much of anything reasonable.
As I said last year
All you need do is look at history: people had working (digital) computers before Von Neumann and other theorists ever noticed them. We literally have thousands of “engineers” and “scientists” writing software and doing “research” on a machine that nobody knows how to build. People dedicate their careers to a subject which doesn’t exist in the corporeal world. There isn’t a word for this type of intellectual flatulence other than the overloaded term “fraud,” but there should be.
“Computer scientists” have gotten involved in this chuckwagon. They have added approximately nothing to our knowledge of the subject, and as far as I can tell, their educational backgrounds preclude them ever doing so. “Computer scientists” haven’t had proper didactics in learning quantum mechanics, and virtually none of them have ever done anything as practical as fiddled with an op-amp, built an AM radio or noticed how noise works in the corporeal world.
Such towering sperg-lords actually think that the only problems with quantum computing are engineering problems. When I read things like this, I can hear them muttering mere engineering problems. Let’s say, for the sake of argument this were true. The SR-71 was technically a mere engineering problem after the Bernoulli effect was explicated in 1738. Would it be reasonable to have a hundred or a thousand people writing flight plans for the SR-71 as a profession in 1760? No.
A reasonable thing for a 1760s scientist to do is invent materials making a heavier than air craft possible. Maybe fool around with kites and steam engines. And even then … there needed to be several important breakthroughs in metallurgy (titanium wasn’t discovered until 1791), mining, a functioning petrochemical industry, formalized and practical thermodynamics, a unified field theory of electromagnetism, chemistry, optics, manufacturing and arguably quantum mechanics, information theory, operations research and a whole bunch of other stuff which was unimaginable in the 1760s. In fact, of course the SR-71 itself was completely unimaginable back then. That’s the point.
its just engineering!
Physicists used to be serious and bloody minded people who understood reality by doing experiments. Somehow this sort of bloody minded seriousness has faded out into a tower of wanking theorists who only occasionally have anything to do with actual matter. I trace the disease to the rise of the “meritocracy” out of cow colleges in the 1960s. The post WW-2 neoliberal idea was that geniuses like Einstein could be mass produced out of peasants using agricultural schools. The reality is, the peasants are still peasants, and the total number of Einsteins in the world, or even merely serious thinkers about physics is probably something like a fixed number. It’s really easy, though, to create a bunch of crackpot narcissists who have the egos of Einstein without the exceptional work output. All you need to do there is teach them how to do some impressive looking mathematical Cargo Cult science, and keep their “results” away from any practical men doing experiments.
The manufacture of a large caste of such boobs has made any real progress in physics impossible without killing off a few generations of them. The vast, looming, important questions of physics; the kinds that a once in a lifetime physicist might answer -those haven’t budged since the early 60s. John Horgan wrote a book observing that science (physics in particular) has pretty much ended any observable forward progress since the time of cow collitches. He also noticed that instead of making progress down fruitful lanes or improving detailed knowledge of important areas, most develop enthusiasms for the latest non-experimental wank fest; complexity theory, network theory, noodle theory. He thinks it’s because it’s too difficult to make further progress. I think it’s because the craft is now overrun with corrupt welfare queens who are play-acting cargo cultists.
Physicists worthy of the name are freebooters; Vikings of the Mind, intellectual adventurers who torture nature into giving up its secrets and risk their reputation in the real world. Modern physicists are … careerist ding dongs who grub out a meagre living sucking on the government teat, working their social networks, giving their friends reach arounds and doing PR to make themselves look like they’re working on something important. It is terrible and sad what happened to the king of sciences. While there are honest and productive physicists, the mainstream of it is lost, possibly forever to a caste of grifters and apple polishing dingbats.
But when a subject which claims to be a technology lacks even the rudiments of the experiments which may one day make it into a technology, you can know with absolute certainty that this ‘technology’ is total nonsense. Quantum computing is less physical than the engineering of interstellar spacecraft; we at least have plausible physical mechanisms to achieve interstellar space flight.
We’re reaching peak quantum computing hyperbole. According to a dimwit at the Atlantic, quantum computing will end free will. According to another one at Forbes, “the quantum computing apocalypse is imminent.” Rachel Gutman and Schlomo Dolev know about as much about quantum computing as I do about 12th-century Talmudic studies, which is to say, absolutely nothing. They, however, think they know smart people who tell them that this is important: they’ve achieved the perfect human informational centipede. This is unquestionably the right time to go short.
Even the National Academy of Sciences has taken note that there might be a problem here. They put together 13 actual quantum computing experts, who poured cold water on all the hype. They wrote a 200-page review article on the topic, pointing out that even with the most optimistic projections, RSA is safe for another couple of decades, and that there are huge gaps in our knowledge of how to build anything usefully quantum computing. And of course, they also pointed out that if QC doesn’t start solving some problems which are interesting to … somebody, the funding is very likely to dry up. Ha, ha; yes, I’ll have some pepper on that steak.
There are several reasonable arguments against any quantum computing of the interesting kind (aka can demonstrate supremacy on a useful problem) ever having a physical embodiment.
One of the better arguments is akin to that against P=NP. No, not the argument that “if there was such a proof someone would have come up with it by now” -but that one is also in full effect. In principle, classical analog computers can solve NP-hard problems in P time. You can google around on the “downhill principle” or look at the work on Analog super-Turing architectures by people like Hava Siegelmann. It’s old stuff, and most sane people realize this isn’t really physical, because matter isn’t infinitely continuous. If you can encode a real/continuous number into the physical world somehow, P=NP using a protractor or soap-bubble. For whatever reasons, most complexity theorists understand this, and know that protractor P=NP isn’t physical. Somehow quantum computing gets a pass, I guess because they’ve never attempted to measure anything in the physical world beyond the complexity of using a protractor.
In order to build a quantum computer, you need to control each qubit, which is a continuous value, not a binary value, in its initial state and subsequent states precisely enough to run the calculation backwards. When people do their calculations ‘proving’ the efficiency of quantum computers, this is treated as an engineering detail. There are strong assertions by numerous people that quantum error correction (which, I will remind everyone, hasn’t been usefully implemented in actual matter by anyone -that’s the only kind of proof that matters here) basically pushes the analog requirement for perfection to the initialization step, or subsumes it in some other place where it can’t exist. Let’s assume for the moment that this isn’t the case.
Putting this a different way: for an N-qubit computer, you need to control, transform, and read out 2^N complex (as in complex numbers) amplitudes of an N-qubit quantum computer to a very high degree of precision. Even for an analog computer with N oscillators, which must be precisely initialized, precisely controlled, transformed and individually read out, to the point where you could reverse the computation by running the oscillators through the computation backwards, this is an extremely challenging task. The quantum version is exponentially more difficult.
Making it even more concrete: if we encode the polarization state of a photon as a qubit, how do we perfectly align the polarizers between two qubits? How do we align them for N qubits? How do we align the polarization direction with the gates? This isn’t some theoretical gobbledeygook; when it comes time to build something in physical reality, physical alignments matter, a lot. Ask me how I know. You can go amuse yourself and try to build a simple quantum computer with a couple of hard-coded gates using beamsplitters and polarization states of photons. It’s known to be perfectly possible and even has a rather sad wikipedia page. I can make quantum polarization-state entangled photons all day; any fool with a laser and a KDP crystal can do this, yet somehow nobody bothers sticking some beamsplitters on a breadboard and making a quantum computer. How come? Well, one guy recently did it: got two whole qubits. You can go read about this *cough* promising new idea here, or if you are someone who doesn’t understand matter here.
FWIIW in early days of this idea, it was noticed that the growth in the number of components needed was exponential in the number of qubits. Well, this shouldn’t be a surprise: the growth in the number of states in a quantum computer is also exponential in the number of qubits. That’s both the ‘interesting thing’ and ‘the problem.’ The ‘interesting thing’ because an exponential number of states, if possible to trivially manipulate, allows for a large speedup in calculations. ‘The problem’ because manipulating an exponential number of states is not something anyone really knows how to do.
The problem doesn’t go away if you use spins of electrons or nuclei; which direction is spin up? Will all the physical spins be perfectly aligned in the “up” direction? Will the measurement devices agree on spin-up? Do all the gates agree on spin-up? In the world of matter, of course they won’t; you will have a projection. That projection is in effect, correlated noise, and correlated noise destroys quantum computation in an irrecoverable way. Even the quantum error correction people understand this, though for some reason people don’t worry about it too much. If they are honest in their lack of worry, this is because they’ve never fooled around with things like beamsplitters. Hey, making it have uncorrelated noise; that’s just an engineering problem right? Sort of like making artificial life out of silicon, controlled nuclear fusion power or Bussard ramjets is “just an engineering problem.”
engineering problem; easier than quantum computers
Of course at some point someone will mention quantum error correction which allows us to not have to precisely measure and transform everything. The most optimistic estimate of the required precision is something like 10^-5 for quantum error corrected computers per qubit/gate operation. This is a fairly high degree of precision. Going back to my polarization angle example; this implies all the polarizers, optical elements and gates in a complex system are aligned to 0.036 degrees. I mean, I know how to align a couple of beamsplitters and polarizers to 628 microradians, but I’m not sure I can align a few hundred thousand of them AND pockels cells and mirrors to 628 microradians of each other. Now imagine something with a realistic number of qubits for factoring large numbers; maybe 10,000 qubits, and a CPU worth of gates, say 10^10 or so of gates (an underestimate of the number needed for cracking RSA, which, mind you, is the only reason we’re having this conversation). I suppose it is possible, but I encourage any budding quantum wank^H^H^H algorithmist out there to have a go at aligning 3-4 optical elements to within this precision. There is no time limit, unless you die first, in which case “time’s up!”
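The angular figure quoted above follows directly from the unit conversion (a quick check; the 10^-5 precision requirement itself is the post's assumption about quantum error correction thresholds):

```python
import math

# Convert the quoted alignment tolerance of 0.036 degrees to microradians.
tolerance_deg = 0.036
tolerance_urad = math.radians(tolerance_deg) * 1e6  # degrees -> microradians
print(f"{tolerance_urad:.0f} microradians")  # ~628, matching the figure above
```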
This is just the most obvious engineering limitation for making sure we don’t have obviously correlated noise propagating through our quantum computer. We must also be able to prepare the initial states to within this sort of precision. Then we need to be able to measure the final states to within this sort of precision. And we have to be able to do arbitrary unitary transformations on all the qubits.
Just to interrupt you with some basic facts: the number of states we’re talking about here for a 4000 qubit computer is ~ 2^4000 states! That’s 10^1200 or so continuous variables we have to manipulate to at least one part in ten thousand. The number of protons in the universe is about 10^80. This is why a quantum computer is so powerful; you’re theoretically encoding an exponential number of states into the thing. Can anyone actually do this using a physical object? Citations needed; as far as I can tell, nothing like this has ever been done in the history of the human race. Again, interstellar space flight seems like a more achievable goal. Even Drexler’s nanotech fantasies have some precedent in the form of actually existing life forms. Yet none of these are coming any time soon either.
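That order-of-magnitude claim checks out:

```python
import math

# Number of decimal digits in 2**4000: log10(2**4000) = 4000 * log10(2).
exponent = 4000 * math.log10(2)
print(f"2**4000 is about 10^{exponent:.0f}")  # ~10^1204, i.e. "10^1200 or so"
```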
There are reasons to believe that quantum error correction, too, isn’t even theoretically possible (examples here and here and here -this one is particularly damning). In addition to the argument above that the theorists are subsuming some actual continuous number into what is inherently a noisy and non-continuous machine made out of matter, the existence of a quantum error corrected system would mean you can make arbitrarily precise quantum measurements; effectively giving you back your exponentially precise continuous number. If you can do exponentially precise continuous numbers in a non-exponential number of calculations or measurements, you can probably solve very interesting problems on a relatively simple analog computer. Let’s say, a classical one like a Toffoli gate billiard ball computer. Get to work; we know how to make a billiard ball computer work with crabs. This isn’t an example chosen at random. This is the kind of argument allegedly serious people submit for quantum computation involving matter. Hey man, not using crabs is just an engineering problem muh Church Turing warble murble.
Smurfs will come back to me with the press releases of Google and IBM touting their latest 20 bit stacks of whatever. I am not impressed, and I don’t even consider most of these to be quantum computing in the sense that people worry about quantum supremacy and new quantum-proof public key or Zero Knowledge Proof algorithms (which more or less already exist). These cod quantum computing machines are not expanding our knowledge of anything, nor are they building towards anything for a bold new quantum supreme future; they’re not scalable, and many of them are not obviously doing anything quantum or computing.
This entire subject does nothing but eat up lives and waste careers. If I were in charge of science funding, the entire world budget for this nonsense would be below that we allocate for the development of Bussard ramjets, which are also not known to be impossible, and are a lot more cool looking.
As Dyakonov put it in his 2012 paper;
“A somewhat similar story can be traced back to the 13th century when Nasreddin Hodja made a proposal to teach his donkey to read and obtained a 10-year grant from the local Sultan. For his first report he put breadcrumbs between the pages of a big book, and demonstrated the donkey turning the pages with his hoofs. This was a promising first step in the right direction. Nasreddin was a wise but simple man, so when asked by friends how he hopes to accomplish his goal, he answered: “My dear fellows, before ten years are up, either I will die or the Sultan will die. Or else, the donkey will die.”
Had he the modern degree of sophistication, he could say, first, that there is no theorem forbidding donkeys to read. And, since this does not contradict any known fundamental principles, the failure to achieve this goal would reveal new laws of Nature. So, it is a win-win strategy: either the donkey learns to read, or new laws will be discovered.”
Further reading on the topic:
Dyakonov’s recent IEEE popsci article on the subject (his papers are the best review articles of why all this is silly):
https://spectrum.ieee.org/computing/hardware/the-case-against-quantum-computing
IEEE precis on the NAS report:
https://spectrum.ieee.org/tech-talk/computing/hardware/the-us-national-academies-reports-on-the-prospects-for-quantum-computing (summary: not good)
Amusing blog from 11 years ago noting the utter lack of progress in this subject:
http://emergentchaos.com/archives/2008/03/quantum-progress.html
“To factor a 4096-bit number, you need 72*4096^3 or 4,947,802,324,992 quantum gates. Let’s just round that up to an even 5 trillion. Five trillion is a big number.”
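The quoted arithmetic is straightforward to verify:

```python
# Verify the quoted gate count for a 4096-bit number: 72 * 4096^3.
gates = 72 * 4096 ** 3
print(f"{gates:,}")  # 4,947,802,324,992 -- roughly five trillion
```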
Aaronson’s articles of faith (I personally found them literal laffin’ out loud funny, though I am sure he is in perfect earnest):
https://www.scottaaronson.com/blog/?p=124
TonyGosling Site Admin
Joined: 26 Jul 2006 Posts: 1416 Location: St. Pauls, Bristol, UK
Posted: Thu Jan 30, 2020 11:59 pm
Rivals rubbish Google’s claim of quantum supremacy
Researchers take aim at tech company for declaring computing milestone
https://www.ft.com/content/cede11e0-dd51-11e9-9743-db5a370481bc
Google claims its quantum chip handled a calculation in 3 minutes 20 seconds that would take the world’s most powerful supercomputer 10,000 years to complete © FT montage / Getty / Dreamstime
Richard Waters in San Francisco, September 23 2019
A significant scientific breakthrough that represents the dawn of computing’s second era? Or a head-turning piece of research with little practical application?
Researchers at Google say they have built the first quantum computer that can perform a calculation far beyond the reach of even the most powerful machine built along traditional, or “classical”, lines — a long-awaited feat known as “quantum supremacy”.
The company’s researchers call it a “milestone” that “heralds the advent of a much-anticipated computing paradigm”. By harnessing the quantum effects exhibited by subatomic particles, such systems have the potential — at least in theory — to leap far ahead of today’s supercomputers.
“They have demonstrated a path to scalable quantum computing. Once you have a fully error-corrected quantum computer, the sky’s the limit,” said Daniel Lidar of the University of Southern California.
But not everyone is ready to call this a turning point for computer science. Google’s claim is “indefensible — it’s just plain wrong”, said Dario Gil, head of research at IBM, one of the competitors in the race to achieve quantum computing.
While crediting some of the internet company’s technical advances, he dismisses the claim that this is a seminal moment for computing as “grandiosity”. The research is just “a laboratory experiment designed to essentially — and almost certainly exclusively — implement one very specific quantum sampling procedure with no practical applications,” he said.
Others working in the field, however, are more willing to back Google’s claims. Its research is “profound”, said Chad Rigetti, a former IBM executive who now heads a quantum computing start-up. “It’s very important for the industry to hit this milestone. It’s a big moment for humans and for science.”
Google claimed its breakthrough in a research paper headlined “Quantum supremacy using a programmable superconducting processor”. First reported by the FT, it was briefly posted on a Nasa website last week before being removed, and the company has not said when it will be formally published.
Unlike the bits in a digital computer, which register either a 1 or 0, quantum bits — known as qubits — can be both at the same time. Along with another quantum phenomenon known as entanglement, through which qubits can influence others they are not even connected to, this opens the way to systems that can handle massively more complex problems.
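The superposition and entanglement described here can be sketched as a toy statevector calculation (a minimal numpy illustration of the arithmetic; physical hardware does not store these amplitudes explicitly):

```python
import numpy as np

# A qubit's state is a 2-component complex vector; |0> = [1, 0].
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate puts |0> into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
plus = H @ ket0

# Measurement probabilities are the squared amplitude magnitudes: 50/50.
print(np.abs(plus) ** 2)  # [0.5 0.5]

# A CNOT gate then entangles two qubits into a Bell state,
# (|00> + |11>)/sqrt(2): the two measurement outcomes become
# perfectly correlated even though neither is determined in advance.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
bell = CNOT @ np.kron(plus, ket0)
print(np.abs(bell) ** 2)  # [0.5 0.  0.  0.5]
```

Each additional qubit doubles the length of the state vector, which is where both the claimed power and the classical simulation cost come from.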
Part of the controversy in the computing world lies in the term quantum supremacy. Coined in 2012 by theoretical physicist John Preskill, it denotes the moment when a system built using the new technology can solve a problem that is, for all practical purposes, impossible for even the most powerful supercomputers to handle.
The term implies that, from now on, quantum computers will be in the ascendant — something that almost guarantees the kind of controversy already being stirred up by Google’s claims.
The internet company’s own researchers have worried that claiming “supremacy” would make them look arrogant, according to one person familiar with the company. They considered coining a different phrase for their achievement before falling back on the generally used term.
Google’s demonstration of supremacy was based on a narrow technical test, creating a system capable of proving that the figures produced by a random number generator were truly random. The quantum chip it built for the purpose, codenamed Sycamore, handled the calculation in 3 minutes 20 seconds. The researchers estimated that it would take the world’s most powerful supercomputer 10,000 years to reach the same result.
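The scale of the claimed gap is easy to verify from the figures in the paragraph above. A quick back-of-the-envelope check:

```python
# Speedup implied by the paper's figures: 200 seconds on the Sycamore chip
# versus an estimated 10,000 years on the world's most powerful supercomputer.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

sycamore_time = 3 * 60 + 20                    # 3 min 20 s = 200 s
supercomputer_time = 10_000 * SECONDS_PER_YEAR

speedup = supercomputer_time / sycamore_time
print(f"{speedup:.2e}")                        # ~1.58e+09
```

That is a claimed speedup of more than a billion times on this one task, which is why the result attracted so much attention despite its narrow scope.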
Behind this limited demonstration lie a series of technical breakthroughs that will have much wider application, pointing the way to full quantum computing, say experts such as Mr Rigetti, who have seen the research.
All of today’s rudimentary quantum systems suffer because of the difficulty of controlling the qubits they are based on. These only hold their quantum states for tiny fractions of a second, and correcting for the errors in such systems is the single biggest obstacle to building useful machines. Yet Google’s researchers claim to have already done enough to produce notable results.
According to Daniel Lidar, an engineering professor at the University of Southern California, the real breakthrough has been to greatly reduce the extent to which qubits interfere with each other — a problem known as “crosstalk”. That enabled the researchers to reach a “fidelity” of 0.2 per cent in their system. Though seemingly low, this still represents “very low error rates relative to what other teams have built,” Mr Lidar said.
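Why does a fidelity of 0.2 per cent count as good? In the simplest textbook model (an illustration, not the paper's actual gate counts), circuit fidelity is the product of individual gate fidelities, so even tiny per-gate errors compound rapidly over a deep circuit:

```python
import math

# Illustrative model: total circuit fidelity as a product of gate fidelities,
#   F_total ~ (1 - e) ** n_gates
# The gate counts below are round numbers for illustration only.
def circuit_fidelity(per_gate_error, n_gates):
    return (1.0 - per_gate_error) ** n_gates

# Over ~1,000 gates, a per-gate error of just 0.6% already drives
# total fidelity down to fractions of a per cent.
print(f"{circuit_fidelity(0.006, 1000):.4f}")   # ~0.0024

# Inverting the model: the per-gate error consistent with a 0.2% total
# fidelity across 1,000 gates.
e = 1.0 - math.exp(math.log(0.002) / 1000)
print(f"{e:.4%}")                               # ~0.62% per gate
```

Under this rough model, a 0.2 per cent end-to-end fidelity over a deep circuit implies per-gate error rates well under 1 per cent — hence Mr Lidar's remark that the error rates are very low by the standards of the field.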
Google was also able to show that the errors in its system were distinct and not correlated to each other, he added — something that means the techniques of error-correction it has used will one day be applicable in far more complex systems.
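The point about uncorrelated errors has a simple classical analogue. A repetition code with majority voting — a toy sketch of the idea, far simpler than the quantum error-correction codes the researchers have in mind — suppresses errors only when they strike each copy independently:

```python
import random

# Toy sketch: a 3-copy repetition code with majority voting suppresses
# *independent* bit-flips, because two simultaneous flips are rare.
# Correlated errors would break this guarantee, which is why Google's
# evidence that its errors are uncorrelated matters.
def encode(bit):
    return [bit, bit, bit]

def flip_independently(codeword, p, rng):
    # Each copy flips independently with probability p.
    return [b ^ (rng.random() < p) for b in codeword]

def decode(codeword):
    return 1 if sum(codeword) >= 2 else 0   # majority vote

rng = random.Random(0)
p = 0.05                                     # 5% physical error rate
trials = 100_000
failures = sum(decode(flip_independently(encode(0), p, rng)) != 0
               for _ in range(trials))

# Logical error rate ~ 3p^2 - 2p^3, roughly 0.7% — well below the 5%
# physical rate, and it keeps falling as more copies are added.
print(failures / trials)
```

The quantum versions of such codes require many physical qubits per logical qubit, which is why error correction at scale, rather than raw qubit counts, is seen as the gateway to useful machines.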
“They have demonstrated a path to scalable quantum computing,” said Mr Lidar. “Once you have a fully error-corrected quantum computer, the sky’s the limit.”
According to Mr Gil of IBM, however, the Google system is a specialised piece of hardware built to handle a single problem, and is far from being a truly programmable, general-purpose computer. It is “a very specific thing to execute this very arbitrary program,” he said.
That means that Google’s supremacy demonstration is not “the start of the journey” towards full quantum computing, he added. Instead, he claimed that distinction for IBM’s own work in the field.
While Google’s research is confined to the science lab, IBM has been working with a number of companies on trying to develop the first applications for the technology. It has been using quantum systems that were not designed to demonstrate supremacy, but instead are trying to cross a lower hurdle known as “quantum advantage” — a point when the technology has practical uses that will enable it to be used in preference to classical systems on some problems.
Mr Lidar, though, says that even if Google’s system can’t be fully controlled yet, it is programmable. Matt Ocko, a venture capital investor who has backed a number of quantum-related start-ups, compared it to the “analytical engine” designed by the mathematician Charles Babbage in the early nineteenth century. Though in theory a general-purpose system that could carry out different tasks, the engine’s hardware had to be set up to perform a particular calculation.
Finding ways to program the system is just one of the tasks that lie ahead and means that practical uses of quantum computing are still years away. But Mr Ocko, a partner at Data Collective in San Francisco, said the Google demonstration was still important validation for people investing in the field.
While IBM hopes to bring quantum computing into mainstream business use within a decade, Google has set its sights higher. It hopes to leap ahead to solve problems far beyond today’s computers, such as modelling protein folding or designing new materials, for instance to power more efficient solar power systems. It hopes to make the demonstration of quantum supremacy the first step on this path.
Meanwhile, in the race for the most powerful computing system, this may not be the end of the road for the world’s existing supercomputers. Experts such as Mr Lidar caution that new techniques for programming these “classical” machines could enable them to match the Google feat. But for now at least Google stands, unchallenged, on top of the computing world.
This article has been amended since publication to change the system “fidelity” from 0.1 per cent to 0.2 per cent.