Post by Mentifex
The Mentifex AI Mind in both Forth and JavaScript
uses "spreading activation" among concepts to
generate sentences of linguistic thought.
To me it seems clear that the process of
propagating instances of concepts is related to
thinking, or _is_ actually thinking.
In MindForth and in the JavaScript AiMind.html,
a concept in the Psi array is implemented in such
a way as to represent a long neuronal fiber holding
a concept by associatively tagging all knowledge
comprising the concept.
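A concept row in such a Psi array might be sketched roughly as follows. This is only an illustrative guess at the data layout; the field names (psi, act, pre, seq) are stand-ins, not MindForth's actual associative tags.

```javascript
// Minimal sketch of a Psi-like concept array: each "fiber" node is a
// row of associative tags linking a concept to the time-point where an
// instance of it occurs. Field names are illustrative, not MindForth's.
const psiArray = [];

// Record one instance of a concept at a given time-point.
function tagConcept(time, conceptId, activation, prevConcept, nextConcept) {
  psiArray.push({
    t: time,          // time-point of this instance
    psi: conceptId,   // which concept this fiber node belongs to
    act: activation,  // current activation level
    pre: prevConcept, // backward associative tag (e.g. subject of a verb)
    seq: nextConcept, // forward associative tag (e.g. object of a verb)
  });
}

// "cats eat fish" stored as three tagged instances:
tagConcept(1, "cats", 40, null, "eat");
tagConcept(2, "eat", 48, "cats", "fish");
tagConcept(3, "fish", 32, "eat", null);
```

All knowledge comprising a concept is then recoverable by collecting every row that carries the same concept id.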
That is, each concept resides in a neuronal fiber,
and that fiber is a member of a group of identical
fibers, so that the concept does not die out when
a single brain cell dies. In other words, each
concept-fiber is part of a ganged network of
identical concept-fibers, redundantly present
so as to strengthen the resident concept.
Associative signals propagate from a noun concept-
fiber to a verb concept-fiber to an object fiber.
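That subject-to-verb-to-object propagation can be illustrated with a toy spreading-activation loop. The simple multiplicative decay here is an assumption for the sketch; MindForth's actual activation rules are more involved.

```javascript
// Toy spreading activation: a directed graph of concept fibers, where
// activating one node propagates a weakened signal to its successors.
const links = {
  cats: ["eat"],  // noun fiber -> verb fiber
  eat: ["fish"],  // verb fiber -> object fiber
  fish: [],
};
const activation = { cats: 0, eat: 0, fish: 0 };

function spread(concept, signal, decay = 0.5) {
  if (signal < 1) return;          // signal too weak to propagate further
  activation[concept] += signal;   // accumulate activation on this fiber
  for (const next of links[concept]) {
    spread(next, signal * decay, decay); // pass a decayed copy onward
  }
}

spread("cats", 40);
// activation: cats 40, eat 20, fish 10
```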
After a new thought is generated, the ReEntry module
http://code.google.com/p/mindforth/wiki/ReEntry
feeds the output thought back into the MindGrid
http://code.google.com/p/mindforth/wiki/MindGrid
as a new nexus of associations among concepts.
In simple recall of factual knowledge, the new
thought is not different from similar old thoughts.
The point here is that the concepts do not propagate.
Only signals from concept to concept propagate.
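The re-entrant loop described above might look like this in miniature: only the associative tags of the output thought are written back into memory at fresh time-points, while the concepts themselves stay put. The entry shape is assumed for illustration.

```javascript
// Toy ReEntry loop: a generated thought (a sequence of concept ids) is
// fed back into the memory grid as a fresh set of time-stamped
// associations among the same, unmoved concepts.
const grid = []; // flat list of {t, concept, pre, seq} entries
let clock = 0;

function reEnter(thought) {
  for (let i = 0; i < thought.length; i++) {
    grid.push({
      t: ++clock,
      concept: thought[i],
      pre: thought[i - 1] ?? null, // backward tag to previous concept
      seq: thought[i + 1] ?? null, // forward tag to next concept
    });
  }
}

reEnter(["cats", "eat", "fish"]); // original input
reEnter(["cats", "eat", "fish"]); // re-entrant copy of the same thought
// grid now holds two time-separated copies of the same associations
```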
But success will depend heavily
on the quality of the inferences.
InFerence (http://code.google.com/p/mindforth/wiki/InFerence)
is still a far-off goal for MindForth, because
self-referential knowledge must be implemented
before the AI can infer new knowledge from old.
- Is meaning maintained, weakened or changed at
a transition from one concept to another?
The meaning of, say, a verb is contained in the
associations which the verb makes "backwards" to
subject-concepts and "forwards" to object-concepts.
In a sentence like "Cats eat fish", the concept
of "eat" takes on meaning as more subjects and
more objects become embedded in the AiMind grid.
If robotic vision is added to the MindGrid,
recognition of visible entities fleshes out
the meaning of "eat" to include visible actions.
Usually meaning is maintained in a new thought,
especially in response to a factual query.
If a human user were to say, "My hobby eats
my money," the idea might give "eat" a new,
figurative meaning.
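The idea that a verb's meaning accumulates from its backward and forward associations can be sketched as follows. Representing a meaning as two growing sets is my simplification, not the MindGrid's actual mechanism.

```javascript
// Sketch: the "meaning" of a verb as the accumulated sets of subjects
// and objects it has been linked with. Each new sentence embeds the
// verb in more associations, enriching (or widening) its meaning.
const verbMeaning = {}; // verb -> { subjects: Set, objects: Set }

function learnSentence(subject, verb, object) {
  const m = (verbMeaning[verb] ??= {
    subjects: new Set(),
    objects: new Set(),
  });
  m.subjects.add(subject); // backward association
  m.objects.add(object);   // forward association
}

learnSentence("cats", "eat", "fish");
learnSentence("dogs", "eat", "meat");
learnSentence("hobby", "eat", "money"); // novel usage widens the meaning
```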
- If there are many successors possible, which
of them are chosen?
MindForth searches backwards in time for the
most recent instances of the concepts under
discussion. The need for cumulative activation
to flesh out an idea may require an extensive search.
Since there are no "successors," but only
linkages that succeed a thought or a query,
successors are not chosen in MindForth.
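The backward-in-time search mentioned above amounts to scanning memory from the newest entry toward the oldest, a minimal sketch of which (with an assumed entry shape) is:

```javascript
// Sketch of the backward-in-time search: scan memory from the newest
// entry toward the oldest and return the most recent instance of a
// concept. The {t, concept} entry shape is illustrative.
const memory = [
  { t: 1, concept: "cats" },
  { t: 2, concept: "eat" },
  { t: 3, concept: "fish" },
  { t: 4, concept: "cats" },
];

function mostRecentInstance(concept) {
  for (let i = memory.length - 1; i >= 0; i--) {
    if (memory[i].concept === concept) return memory[i]; // newest match
  }
  return null; // concept not yet present in memory
}

// mostRecentInstance("cats") finds the instance at time-point 4
```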
- When are inference threads terminated?
Although the Mentifex AI has not yet achieved
InFerence (http://code.google.com/p/mindforth/wiki/InFerence),
the AI Mind does engage in a MeanderingChain
(http://code.google.com/p/mindforth/wiki/MeanderingChain)
of thought, which is terminated only when the
associative chain fails to find related concepts.
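A chain of thought that terminates when no fresh association can be found might be sketched like this; the association table and the visited-set cutoff are my assumptions for the toy version.

```javascript
// Toy MeanderingChain: hop from each concept to an associated concept
// until no unvisited related concept remains; the chain then terminates.
const related = {
  cats: ["eat"],
  eat: ["fish", "cats"],
  fish: ["water"],
  water: [],
};

function meander(start) {
  const chain = [start];
  const visited = new Set([start]);
  let current = start;
  while (true) {
    const next = (related[current] ?? []).find((c) => !visited.has(c));
    if (!next) break; // no fresh association found: the chain terminates
    chain.push(next);
    visited.add(next);
    current = next;
  }
  return chain;
}

// meander("cats") wanders cats -> eat -> fish -> water, then stops
```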
A question about your AI system: how do you
store knowledge? Is there a declarative database,
say, in XML or something text-based?
Knowledge in the AI Mind is stored as associative
tags linking all the concepts in a particular
idea or assertion. MindForth and AiMind.html
"comprehend" an input of new knowledge by
assigning the discernible relationships
in the idea to the constituent concepts
in the MindGrid. If a human input contains
a previously unknown word, the AiMind
creates a new concept of the unknown word.
There is no "declarative database,"
but rather a primitive neural network
capable of adding new elements (concepts),
propagating signals from concept to concept,
and generating thoughts under the control
of the EnCog module as a superstructure.
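The comprehension-by-tagging behavior described above, including the creation of a new concept for a previously unknown word, might be sketched as follows (the lexicon layout and concept-id scheme are illustrative, not the AiMind's actual code):

```javascript
// Sketch of comprehension-by-tagging: each word of an input sentence is
// looked up in a lexicon; an unknown word gets a brand-new concept id,
// so the network grows instead of rejecting novel vocabulary.
const lexicon = { cats: 1, eat: 2, fish: 3 };
let nextConceptId = 4;

function comprehend(sentence) {
  return sentence.split(" ").map((word) => {
    if (!(word in lexicon)) {
      lexicon[word] = nextConceptId++; // new concept for the unknown word
    }
    return lexicon[word]; // the sentence becomes a chain of concept ids
  });
}

// comprehend("cats eat quiche") creates a new concept for "quiche"
```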
Regards,
Joachim
Thank you for these opportunities to enlarge
on how the artificial intelligence works.
Yesterday I received a small (US $22.00)
royalty check for the AI4U textbook, and
so I feel a duty to continue working on
the AI for the sake of AI4U purchasers.
Sincerely,
Arthur T. Murray
--
http://groups.google.com/group/comp.ai.philosophy
http://groups.google.com/group/comp.ai.nat-lang
http://groups.google.com/group/de.sci.informatik.ki
http://groups.google.com/group/comp.sys.super