Discussion:
Solving arbitrarily large problems with a small qubit size quantum computer
e***@gmail.com
2007-05-05 01:18:52 UTC
Hi,
This should lead to lots of PhD and master's degrees. Please analyze
and comment.
To the moderator: Please suggest newsgroups where this message would
be relevant.
[C.p. moderator: will attempt to keep your cross-post.
Depending on thread direction you can try various other comp groups
for architecture or databases (algorithm theory might be a little weak,
since c.t. tends to be a weak group), or sci.math; take your pick.]

***
This note is relevant on comp.parallel and comp.sys.super because it
is the parallel computer scientists who know how to modify algorithms
such as Shor's algorithm to run on arbitrarily large problem input
sizes: serially extract the output, then recycle the output
recursively to keep obtaining more and more bits of the answer.
Erach
Erach

You can serialize a quantum computer algorithm and get superb
results.
You must have seen that http://www.dwavesys.com has produced a 16-qubit
"adiabatic" quantum computer and is planning a 1024-qubit quantum
computer for next year. Even if you doubt dwavesys (which perhaps I now
do), consider that IBM has built a 7-qubit quantum computer.
In Japan, researchers are figuring out how to use superconductors to
make a quantum computer.

Now, how do we make a 10,000-digit factoring quantum computer out of
only 7 qubits (as IBM has made)?
Or do artificial intelligence, bioinformatics, or database retrieval?
We SERIALIZE THE OUTPUT, just as human brains do: they work in
parallel but communicate serially.

FOR EXAMPLE,

1. Feed the problem and "0 output digits".
2. Get the first X output digits.
3. Feed the problem and the "first X output digits".
4. Get the second X output digits.
5. Feed the problem, the "first X output digits", and the "second X
output digits".
6. Get the third X output digits.

Repeat these steps feed-forward, recursively, until all 10,000
output digits are obtained; a sketch of the loop follows.
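
Here is a minimal sketch of the driver loop being proposed, in
Python. The function quantum_solve_next_digits is hypothetical: it
stands in for one run of a small quantum computer that, given the
problem and the digits recovered so far, returns the next X digits.
The sketch shows only the classical control flow, not how such an
oracle could be built.

def quantum_solve_next_digits(problem, known_digits, x):
    """Hypothetical stand-in for one run of the quantum computer:
    given the problem and the digits recovered so far, return the
    next x output digits."""
    raise NotImplementedError("placeholder for the quantum oracle")

def serialized_solve(problem, total_digits, x):
    """Classical driver: accumulate the answer x digits at a time."""
    known = ""                                # step 1: "0 output digits"
    while len(known) < total_digits:
        # steps 2, 4, 6, ...: one quantum run per chunk, feeding the
        # partial answer forward into the next run
        known += quantum_solve_next_digits(problem, known, x)
    return known[:total_digits]

# e.g. serialized_solve(problem, 10000, x) takes ceil(10000/x) runs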

So now I have made a breakthrough in quantum programming, and it can
be used in every field where computers are used.

Erach Irani (US Citizen, now living in Mumbai, India).
PhD Computer Science (University of Minnesota, Minneapolis, 1985-1993)
(Specialization Artificial Intelligence)
B.Tech Computer Science (IIT Bombay, Mumbai, India) 1981-1985

--
Ted Dunning
2007-05-06 09:01:53 UTC
I don't think so.

If you serialize the computation, then you have inherently limited the
quantum aspects of the computation to the single digit you are working
with. Between digits, you have only conventional deterministic
relationships. This means that you get quantum speedup (if any) only
in the parts you don't serialize, while the serialized parts all
proceed at conventional speeds. Even worse, the serialization involves
collapsing the wave function of the computer at each step, which makes
the entire computation deterministic.

The same caveat applies to parallelization of quantum computers.
Unless the computers are able to communicate via some mechanism that
avoids wave-function collapse, you don't get any large-scale
quantum computation.
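
To make the collapse point concrete (standard textbook notation,
nothing specific to this thread): before readout, the register holds a
superposition over all candidate values,

    $|\psi\rangle = \sum_x a_x |x\rangle$,

and measuring it projects onto a single classical branch $|x_0\rangle$
with probability $|a_{x_0}|^2$. Once that digit is stored classically,
the next run starts over from a plain bit string; no amplitude or
entanglement survives between runs. For scale, standard circuits for
Shor's algorithm on an n-bit modulus need on the order of 2n logical
qubits (roughly 2n+3 in one well-known construction), so the whole
superposition has to fit in the machine at once.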

Not only does this approach not work, but it isn't even novel. This
is a fairly standard suggestion for people to come up with when they
are first presented with the ideas behind quantum computation.

Sorry to rain on your parade. It is really exciting to have ideas
like this. The trick is to savor the euphoric moments, even with ideas
that turn out to be not as good as they first seemed, and to avoid
being too depressed by the downturn when they fall apart. The only
way to have good ideas is to have lots of bad ones as well (and then
prune efficiently).

So don't let this discourage you!
Post by e***@gmail.com
You can serialize a quantum computer algorithm and get superb
results.
...
Now, how do we make a 10,000-digit factoring quantum computer out of
only 7 qubits (as IBM has made)?
Or do artificial intelligence, bioinformatics, or database retrieval?
We SERIALIZE THE OUTPUT, just as human brains do: they work in
parallel but communicate serially.
--
e***@gmail.com
2007-05-08 01:03:49 UTC
Post by Ted Dunning
Sorry to rain on your parade. It is really exciting to have ideas
like this. The trick is to savor the euphoric moments, even with ideas
that turn out to be not as good as they first seemed, and to avoid
being too depressed by the downturn when they fall apart. The only
way to have good ideas is to have lots of bad ones as well (and then
prune efficiently).
Actually, I wrote down my ideas and then thought about what I was doing.
I sent an email to Peter Shor, who wrote the Shor algorithm (he's at
MIT now; I had to register at arXiv.org to get his email address).

He did not comment on my paper, but emailed back that CONVOLUTION
(the product of two Fourier series) is useful in quantum computation
(as I asked), but not, I think, in factorization.

So I'm going to think about SUDOKU now and how to play it, and what
graph-theory problem it reminds me of (traveling salesman seems a
first rough guess).

Erach

--
e***@gmail.com
2007-05-09 17:00:51 UTC
[C.p. moderator: at the request of the comp.ai moderator, I've axed
your comp.ai cross posts. From his perspective, you don't have enough AI
content in your post, and if you want this in his groups, you have to
submit to him and pass his criteria.
You are barely skimming comp.parallel's topic.]



Obtaining the solution one digit at a time, or one character at a
time, can give SUPERB benefits.

Until it is PROVEN that a hybrid of a finite-qubit quantum computer
and a DIGITAL computer cannot solve a problem requiring a large number
of bits in the solution, you cannot ASSUME that it cannot.

Now, if I have a 100,000-digit number to obtain, or a lot of text in
the answer, obtaining one digit at a time through an interaction
between a quantum computer and a digital computer requires only O(N)
iterations of the quantum computer (the serial computer's contribution
can be added or multiplied in).
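
To spell out the arithmetic behind the O(N) claim (the variable names
are mine): if the answer has $N$ digits and each quantum run yields
$X$ of them, the driver needs

    $\lceil N / X \rceil$ runs, e.g. $N = 100{,}000$ and $X = 1$ gives $100{,}000$ runs,

which is $O(N)$ invocations of the quantum computer. Note that this
counts invocations only; it says nothing about what each individual
invocation must accomplish.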

There are automated theorem provers (see Wikipedia) which produce
proofs that humans cannot understand, so can we build a hybrid
automated theorem prover using a quantum computer and a digital computer?

For a hint, study convolution and see how to repeatedly inject a
Fourier series into the quantum computer to get the output, one
digit or one character at a time.
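
For anyone following that hint, the standard fact in play is the
convolution theorem (textbook background, not specific to this
thread): the Fourier transform turns convolution into pointwise
multiplication,

    $(f * g)(n) = \sum_k f(k)\, g(n-k)$, and $\widehat{f * g} = \hat{f} \cdot \hat{g}$,

which is presumably what "the product of two Fourier series" in
Shor's reply above refers to.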

Thanks,
Erach

--
Alex Colvin
2007-05-09 23:11:31 UTC
Post by e***@gmail.com
Until it is PROVEN that a hybrid of a finite-qubit quantum computer
and a DIGITAL computer cannot solve a problem requiring a large number
of bits in the solution, you cannot ASSUME that it cannot.
Well, if you compute a few qubits at a time, you have to hold partial
results somewhere. If you store them in a quantum memory, you need qubits.
If you store them in classical bits, you've just lost all the quantum
information.

If you only need a whole lot of one-qubit results, you're fine.

Despite many popular articles, it's not enough to have a qubit be a
superposition of 0 and 1; it's necessary that the pair be a superposition
of 00 and 11, and not 01 and 10. It's the entanglement that makes this
interesting.
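
To spell that out (standard notation): a uniform superposition of two
independent qubits is a product state,

    $(|0\rangle + |1\rangle)/\sqrt{2} \otimes (|0\rangle + |1\rangle)/\sqrt{2} = (|00\rangle + |01\rangle + |10\rangle + |11\rangle)/2$,

with no entanglement, whereas

    $(|00\rangle + |11\rangle)/\sqrt{2}$

cannot be factored into single-qubit states at all; that is the
entangled pair.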
--
mac the naïf

--
e***@gmail.com
2007-05-21 17:07:22 UTC
Fault-tolerant algorithms for quantum computer hardware.

I saw the hardware requirements (error correction) for quantum
computers. The number of quantum gates needed to correct errors in
a quantum computer seems large.

Is it not possible to construct an algorithm that is itself
fault-tolerant, especially if the algorithm is a hybrid with a digital
computer?

The algorithm could use distributed computing (genetic algorithms,
neural networks) and perhaps even distributed theorem provers.

Human and animal brains produce excellent results, even though (I
would guess) the underlying hardware is faulty.

Erach

--
