Discussion: Inching closer on the Singularity Clock.
Mentifex
2010-10-19 23:27:13 UTC
Greetings to all Singularitarians.
The Singularity is an event brought to
you free of charge and open-source by
Project Mentifex, which has today
updated its AI Mind in JavaScript
for Microsoft Internet Explorer at

http://www.scn.org/~mentifex/AiMind.html

where the input box now invites users to
"Enter subject + verb + object;
query knowledge base with subject + verb + [ENTER]",
and the Tutorial display mode shows you
what the AI Mind is thinking.

http://www.scn.org/~mentifex/mindforth.txt
was updated in similar fashion yesterday,
but MindForth cannot be run by clicking
on a single link (as AiMind.html can), so
here is a sample interaction with MindForth:

First we type in five statements:
tom writes jokes
ben writes books
jerry writes rants
ben writes articles
will writes poems
We then query the AI in Tutorial mode with the input
ben writes [ENTER]
and the AI Mind shows us how it thinks about the query:

VerbAct calls SpreadAct with activation 80 for Psi #0
VerbAct calls SpreadAct with activation 76 for Psi #117 POEMS
VerbAct calls SpreadAct with activation 76 for Psi #117 POEMS
VerbAct calls SpreadAct with activation 80 for Psi #113 BOOKS
VerbAct calls SpreadAct with activation 80 for Psi #58 BE
VerbAct calls SpreadAct with activation 76 for Psi #115 RANTS
VerbAct calls SpreadAct with activation 76 for Psi #115 RANTS
VerbAct calls SpreadAct with activation 80 for Psi #113 BOOKS
VerbAct calls SpreadAct with activation 80 for Psi #113 BOOKS
VerbAct calls SpreadAct with activation 76 for Psi #111 JOKES
VerbAct calls SpreadAct with activation 76 for Psi #111 JOKES

Robot: BEN WRITES BOOKS

The AI selects a valid answer to the query by
combining the activation on "BEN" and "WRITES" so as
to spread a _cumulative_ activation to the word "BOOKS".
Other potential answers do not become sufficiently
activated, because they are linked to other subjects
of "WRITES".
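
In toy form, the selection works roughly like this
(a minimal sketch in plain ANS Forth with made-up
concept numbers; it is not the actual MindForth code):

  5 constant #facts
  \ subject of each fact: 1=TOM 2=BEN 3=JERRY 4=WILL
  \ object  of each fact: JOKES BOOKS RANTS ARTICLES POEMS
  create subjects  1 , 2 , 3 , 2 , 4 ,
  create acts  #facts cells allot

  : init-acts ( -- )  #facts 0 do  76 acts i cells + !  loop ;

  : spread ( subj -- )  \ boost every fact whose subject matches
    #facts 0 do
      dup subjects i cells + @ = if
        4 acts i cells + +!
      then
    loop drop ;

  : winner ( -- n )     \ index of the most active fact
    0  #facts 1 do
      acts i cells + @  over cells acts + @  > if drop i then
    loop ;

After "init-acts 2 spread", the two BEN facts sit at 80
and the rest at 76, so "winner ." prints 1, the BOOKS
fact -- the same slosh-over the Tutorial trace shows above.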

In Singularity solidarity,

Mentifex (mindmaker)
--
http://AiMind-i.com
http://cyborg.blogspot.com
http://code.google.com/p/mindforth
http://www.scn.org/~mentifex/AiMind.html
Hugh Aguilar
2010-10-20 00:43:10 UTC
Post by Mentifex
Robot: BEN WRITES BOOKS
The AI selects a valid answer to the query by
combining the activation on "BEN" and "WRITES" so as
to spread a _cumulative_ activation to the word "BOOKS".
Other potential answers are not sufficiently activated,
because they are from other subjects of "WRITE".
I will download MindForth later and look into it --- I've been planning
on doing this for a while, as you are one of the very few people
around who is writing an application in Forth, rather than writing yet-
another-Forth-compiler.

In the meantime, would your program benefit from an associative array
package? Isn't it true that you have an in-memory database of all
these words (BEN, etc.) that are in your program's vocabulary? My
novice package now has an associative array package (ASSOCIATION.4TH)
that is based on Left-Leaning Red-Black trees. It is pretty efficient
for lookup of elements, as it does no tree restructuring during lookup
(as compared to Splay Trees, for example). It only does restructuring
during insertion and deletion. Most likely, you don't do a lot of
deletion, as there is no particular reason to remove a word from the
vocabulary once it has been learned.

http://www.forth.org/novice.html
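
To make the idea concrete, here is roughly what an in-memory
vocabulary looks like with a plain linked list in ANS Forth
(only a sketch, not the actual ASSOCIATION.4TH API; an LLRB
tree replaces the O(n) scan with an O(log n) descent):

  variable vocab  0 vocab !          \ head of the word list

  : add-word ( c-addr u n -- )       \ map a word to a concept number
    here >r
    vocab @ ,                        \ link to previous node
    ,                                \ concept number
    dup ,                            \ string length
    here over allot swap move        \ string characters
    align  r> vocab ! ;

  : node-string ( node -- c-addr u )
    2 cells +  dup cell+  swap @ ;

  : find-word ( c-addr u -- n true | false )
    vocab @
    begin dup while                  \ ( c-addr u node )
      >r  2dup r@ node-string compare 0= if
        2drop  r> cell+ @  true exit
      then
      r> @                           \ follow the link
    repeat
    nip nip ;                        \ not found: leave false

  s" ben" 2 add-word
  s" ben" find-word . .              \ prints -1 2 (found, concept 2)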
I read partway into that book, "The Singularity is Near," but didn't
finish because the unrelenting optimism was in complete contradiction
to my own pessimistic, apocalyptic and downright gloomy outlook on
life.

I think that the idea of computers becoming sentient is
unlikely because, for lack of motivation, no effort is being
made to make them intelligent. We have incredibly powerful computers nowadays
(compared to my old C64), but all of this awesome processing power is
being devoted to making people dumb rather than making computers
smart. I'm referring to the internet, of course. It used to be said
that a million monkeys typing randomly on a million keyboards would
eventually produce something meaningful --- but now, thanks to the
internet, we know that this isn't true!

The only way that computers could become sentient is if there
were some motivation to make computers intelligent and able to act
autonomously, and a lot of programmers were working on this kind of
software, and a lot of processing power was being devoted to
autonomous behavior. I think that the motivation for making this
effort would be war. The only time that a computer needs to act
autonomously is when there is no human sitting at the keyboard
micromanaging the computer's work. The only reason that I can think of
for why there wouldn't be a human sitting at the keyboard, is that the
human has been killed (or, at least, that he is hiding under the desk
dodging bullets).

Programmers will put effort into making computers autonomous
only in time of war, when computers need to be autonomous. Within this
environment, computers will begin to make decisions on their own.
Factories will continue to manufacture whatever they manufacture, even
when there aren't any human operators remaining in the factory, or the
human operators are all day laborers who don't know what they are
doing and rely on the computer to micromanage their work for them.
Only within the environment of a war will computers become sentient
and consider humans to be subservient to them.
Thomas 'PointedEars' Lahn
2010-10-20 19:28:36 UTC
Post by Hugh Aguilar
Post by Mentifex
Robot: BEN WRITES BOOKS
The AI selects a valid answer to the query by
combining the activation on "BEN" and "WRITES" so as
to spread a _cumulative_ activation to the word "BOOKS".
Other potential answers are not sufficiently activated,
because they are from other subjects of "WRITE".
I will download MindForth later and look into […]
Please do not feed the troll.


F'up2 poster

PointedEars
--
realism: HTML 4.01 Strict
evangelism: XHTML 1.0 Strict
madness: XHTML 1.1 as application/xhtml+xml
-- Bjoern Hoehrmann
Mentifex
2010-10-23 23:56:30 UTC
As best as I can determine, you are writing an expert system.
Post by John Passaniti
No, he isn't.
John Passaniti is correct in asserting that I
am not writing an expert system with MindForth.
Post by John Passaniti
An expert system takes some ontology and attempts
to use the relationship between facts represented
in that ontology to solve a problem. That's not
what Arthur is doing.  At best, what Arthur's
code does is take ASCII strings and assign to them
weights and linkages that sorta-kinda-maybe imply
relationships between those strings.  And that
description is being extraordinarily generous.
The "weights and linkages" play two different roles.
The "linkages" among subject and verb and object (SVO)
directly assert relationships among S-V-O concepts.
The "weights" (or activations) play a role in the
competition among concepts to be selected for
inclusion in a thought being generated.
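In toy form (plain Forth with a made-up layout, not the
actual MindForth structures), the distinction looks like
this: each linkage is an immutable stored S-V-O triple,
while each weight is a separate mutable cell that is
consulted and adjusted only during selection:

  : fact, ( s v o -- )  rot , swap , , 0 , ;   \ s v o weight

  create facts
    1 10 111 fact,    \ TOM   WRITES JOKES
    2 10 113 fact,    \ BEN   WRITES BOOKS
    3 10 115 fact,    \ JERRY WRITES RANTS

  : fact-weight ( n -- addr )  4 cells * facts + 3 cells + ;
  : excite ( amount n -- )  fact-weight +! ;

Saying "8 1 excite" raises the weight of the BEN-WRITES-BOOKS
fact without touching the triple itself; the linkages assert
what is true, and the weights decide what gets thought next.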
Post by John Passaniti
Arthur hypes this as intelligence,
And it is indeed intelligence, because
MindForth understands English sentences
of the subject-verb-object variety, and
thinks in the same kind of sentences.
Post by John Passaniti
but all that comes out (at best)
are words that are repeated often and/or
that had some proximity to each other.
Two weeks ago I was coding MindForth to
answer "what are you" or "what am i" queries
with exhaustive knowledge-base (KB) replies.
When that functionality was working well,
one week ago I coded a Tutorial-mode display
of the subject-verb activational slosh-over
onto logically valid direct objects. Now I
need to integrate the two functionalities,
because the slosh-over coding disrupted the
robustness of the KB-query-response coding.
Post by John Passaniti
[...] You're pretty high up there on the
Internet kook scale, but you aren't so far
gone that you can't look at Arthur's code
and come to the same conclusions all of the
rest of us have. [...]
Or think for yourself and draw your own conclusions.
Post by John Passaniti
So you might ask then, what is Arthur after?
It's pretty simple, actually.  Arthur knows
his code doesn't actually do anything useful.
And he may even realize-- after all these years--
that the model he presents for artificial
intelligence is trite, ill-specified, and isn't
even a stepping stone to something greater.  
No, no, and no. MindForth AI has already accomplished
wondrous things -- thinking since January of 2008.
The biggest problem is getting the mind-wide schema
of conceptual activations right.
Post by John Passaniti
But that's not what Arthur is doing.  He's trying
to bootstrap artificial intelligence by presenting
a meme.  
Granted, there may be memes involved, but I am
actually trying to create a truly thinking mind --
and I have. But getting the activational slosh-over
to display properly on Mon.18.OCT.2010 disrupted
the other activations milling around in the AI.
Nowadays I am re-integrating the activation-flows.
Post by John Passaniti
To him, what ultimately matters is if
someone is sufficiently motivated to actually do
the research, implement the code, and build
something that actually works.  It's unclear if this is
out of some selfless goal or because he wants credit.
Believe it or not, it's all based on a Nietzschean idea
that the philosopher does what he feels he must in life,
and everything else goes by the wayside.
Post by John Passaniti
But it really doesn't matter. [...]
True. As John Maynard Keynes said of economics,
"In the long run we are all dead."
Post by John Passaniti
Don't get sucked in.  Look over the model. Review
the code.  That should be all that's necessary.
Also, just wait a while longer while I work on
re-integrating the conceptual activation-levels
and on implementing inhibition-based KB-query
exhaustiveness not just for be-verb thoughts
but for all the basic kinds of S-V-O thoughts.
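
The inhibition mechanism, again in toy form (my own sketch
with made-up numbers, not the actual MindForth code): after
each answer is reported, its activation is driven below zero,
so that re-asking the same query surfaces the next fact,
until the knowledge base is exhausted:

  create wts  80 , 80 ,            \ BEN's objects: BOOKS, ARTICLES
  : wt ( n -- addr )  cells wts + ;
  : best ( -- n )  wts @  wts cell+ @  >= if 0 else 1 then ;

  : answer ( -- )                  \ report strongest fact, inhibit it
    best dup wt @ 0> if
      dup 0= if ." BEN WRITES BOOKS" else ." BEN WRITES ARTICLES" then
      -100 swap wt +!              \ inhibit the reported fact
    else drop ." (no more answers)" then ;

Three calls to "answer" yield BOOKS, then ARTICLES, then
"(no more answers)" -- an exhaustive KB reply, one thought
at a time.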

The MindForth work is tremendously exciting
right now. I really would like there to be
a period of time in which Forth programmers
surge into great scarcity-based high-salary
demand for artificial intelligence coding,
but, alas, the Lispers and C-Plus-Plussers
will probably grab the AI Forthmind and
run away from the Forth community with it.

Imagine AI Mind installations on a large scale,
like nuclear power plant control-rooms as depicted
in the film "The China Syndrome," starring Jack
Lemmon. Imagine the best of the best Forth programmers
hunched over their mind-control consoles, monitoring
the Forth AI Superintelligence 24/7 night and day.
That's what Arthur wants.

Arthur
--
http://www.scn.org/~mentifex/AiMind.html
http://www.scn.org/~mentifex/mindforth.txt
John Passaniti
2010-10-24 01:50:48 UTC
Post by Mentifex
No, no, and no. MindForth AI has already accomplished
wondrous things -- thinking since January of 2008.
The biggest problem is getting the mind-wide schema
of conceptual activations right.
This is really the primary problem people have with you, Arthur.

Your definition of "thinking" is incredibly generous for what the code
actually does. When you toss away the grandiose language, "thinking"
is far less exciting than saying what it *really* is. And in the case
of your code, the most you can say is that once you go through the
Byzantine mechanism you've concocted, you have symbols that sort
closer to some symbols than others. They are just symbols, Arthur,
not "concepts." Concepts are what *we* assign to the symbols; to your
"AI Mind" they are just ASCII strings without any meaning.

That's not "thinking". At best, that is "associating". And in your
case, it isn't even that.

It's the same thing over and over with you. With every breathless
progress report, you take simple things and make them grandiose. And
when the rest of us amuse ourselves by downloading your code or going
to your online versions, we dutifully type in some subject-verb-object
trio, and we get back garbage, nonsense, what we typed in, or...
nothing. And when you're called on it, it's always the same thing--
that code you've written (that nobody has access to) is doing wondrous
things that would blow the minds of the AI community and validate
everything you've ever said. If they ever saw it.

It would have been very different if, years ago when you started on
this journey, you had dropped the idiotic pretenses (like the Mentifex
pseudonym), cut through the grandiose bullshit, put down the Vernor
Vinge, picked up a textbook, and stayed away from the endless self-
promotion. Imagine if instead you just came out and said, "I'm
playing with some ideas here. They might lead nowhere, but let's find
out." Imagine how different things would be today. Imagine if
instead of viewing yourself as some modern Prometheus bringing the
gift of fire down from Mount Olympus, you instead viewed yourself as
what you see in the fucking mirror: Arthur. Now granted, this
suggests that no statues will be erected in your honor, but despite
what your mommie told you, you really aren't a genius. And if you
think your code regurgitating symbols represents "thinking" then I
recommend getting back on the medication. We all have an internal
voice, Arthur, but we recognize it as ourselves. Somewhere along the
line, you've confused your internal voice with what's spewing out from
your monitor. It's a treatable condition.
