Otopy's AI Search Engine!

Otopy's Engine Mimics the Way the Brain Learns

Otopy's approach to simulating the way the brain learns is similar to the way a student learns new material. For example, when a student is about to learn a new subject, the first thing the student needs is a set of reference materials written in a language the student understands. The student then reads the material and brings questions to a professor as they arise; the professor provides clarification and the student continues. This cycle repeats until the student is comfortable with the new material.

The Otopy Story

Otopy's technology is built on well-known and well-established n-gram technology. N-grams (groups of words in a specific order) have been known since the late 1950s, but they have not been used with much success for broad or large-corpus search, because once the word count goes beyond bi-grams (n > 2), the number of terms (or n-grams) explodes exponentially and quickly becomes unmanageable.
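
To make the explosion concrete, here is a minimal Python sketch. It is illustrative only, not Otopy code; the sample sentence, the assumed vocabulary of 100,000 words, and the resulting figures are all assumptions chosen for the example:

    from collections import Counter

    def ngrams(tokens, n):
        """Return every contiguous n-word sequence in a token list."""
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    tokens = "the quick brown fox jumps over the lazy dog".split()
    print(Counter(ngrams(tokens, 2)).most_common(3))   # a few of the bi-grams

    # Why the index explodes: with a vocabulary of V words there are V**n
    # possible n-grams, so each extra word of context multiplies the space by V.
    V = 100_000                                        # assumed vocabulary size
    for n in range(1, 6):
        print(f"{n}-grams: up to {V ** n:.1e} possible terms")

Each additional word of context multiplies the space of possible terms by the size of the vocabulary, which is why conventional n-gram indexes tend to stall at bi-grams or tri-grams.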

The personal history of Otopy co-founder and co-inventor Dan Kikinis plays a special role in the insight that led to Otopy. He grew up in Switzerland speaking multiple languages concurrently. From this experience came his insight that machine translation must be done on groups of words, or idioms describing concepts, to be more akin to human thinking. Dan came up with a new way of doing this that Otopy calls u-grams. Processing n-grams or u-grams successfully requires gigabytes to exabytes of memory; Otopy's u-grams sit at the lower end of that range, while raw n-grams sit at the higher end and hence remain impractical for the foreseeable future. This large amount of memory is needed because a minimum of four or five words, i.e., 4-grams or 5-grams, is necessary for meaningful translation. (Today, hardware is readily available to support Otopy's technology up to 16-grams and beyond.)
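
A rough back-of-envelope calculation shows where memory figures of this scale come from. The sketch below is not based on Otopy's measurements; the corpus sizes, the per-entry cost, and the assumption that every n-gram occurring in the corpus gets one index entry are all illustrative:

    # Assumed figures, for illustration only.
    bytes_per_entry = 64                       # average cost of one index entry
    for corpus_tokens in (1e9, 1e12):          # a large site vs. a web-scale crawl
        for max_n in (5, 16):
            # Each n-gram order contributes at most one entry per corpus position.
            entries = sum(corpus_tokens - n + 1 for n in range(1, max_n + 1))
            print(f"{corpus_tokens:.0e} tokens, up to {max_n}-grams: "
                  f"~{entries * bytes_per_entry / 1e9:,.0f} GB of index entries")

Even under these modest assumptions the index runs from hundreds of gigabytes into the petabyte range, and indexing every possible n-gram, rather than only those that actually occur, would be larger by many orders of magnitude, which is where the upper end of the range comes from.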

Multiple Corpora and Cross-Relating Concepts

The idea behind Otopy's search technology is that the corpus of knowledge is organized into relative vector spaces, and that multiple languages have parallel vector spaces, or parallel bodies, akin, for example, to the concept of parallel universes. Therefore, finding the location of a concept in the vector space of one language allows a jump into the same area in the vector space of another language, where nearby terms can be searched to find the correct translation, rather than translating word by word. Only a few trans-spatial vectors are needed to find the “general neighborhood,” and in most cases these can be easily provided by simple dictionaries or similar reference material. However, one remaining problem with such an approach is that the n-grams grow exponentially, and a related problem is filtering the resulting volume of n-grams. Otopy has solved this “exploding index” problem and has successfully developed a digital brain capable of mimicking the way the human brain learns.
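
One common way to realize the parallel-spaces idea, sketched below in Python with NumPy, is to learn a linear map between two vector spaces from a handful of dictionary pairs and then look up nearest neighbors on the other side. The toy 3-dimensional vectors, the word pairs, and the least-squares mapping are assumptions made for the illustration; they are not Otopy's data or algorithm:

    import numpy as np

    # Toy concept vectors in two "parallel" spaces (real spaces would be
    # learned from each language's corpus).
    en_space = {"dog": [0.9, 0.1, 0.0], "cat": [0.8, 0.2, 0.1],
                "car": [0.1, 0.9, 0.2], "road": [0.2, 0.8, 0.3]}
    de_space = {"Hund": [0.0, 0.9, 0.1], "Katze": [0.1, 0.8, 0.2],
                "Auto": [0.9, 0.1, 0.3], "Strasse": [0.8, 0.2, 0.4]}

    # A few dictionary pairs act as the "trans-spatial vectors".
    seed_pairs = [("dog", "Hund"), ("car", "Auto")]
    X = np.array([en_space[e] for e, _ in seed_pairs])
    Y = np.array([de_space[d] for _, d in seed_pairs])

    # Least-squares linear map from the English space into the German space.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)

    def nearest_translation(word):
        """Project an English concept vector and return the closest German term."""
        v = np.array(en_space[word]) @ W
        return min(de_space, key=lambda d: np.linalg.norm(np.array(de_space[d]) - v))

    print(nearest_translation("cat"))   # with these toy vectors, prints "Katze"

The point of the sketch is that once the two spaces are roughly aligned, a dictionary is needed only for a few anchor terms; every other translation is found by searching the neighborhood on the other side.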
