I’ve been working on an implementation of semantic folding, and I have reached the point where I can generate word SDRs with proper semantics encoded in them (crawling Wikipedia for the input). The process takes days and eats up a ton of disk space for caching while it runs, but it works. I’ve done some comparisons of SDR operations between my SDRs and cortical.io’s word fingerprints, and I get a similar level of usability from them. I am still struggling with topology (the “folding” part of semantic folding), but I thought I would build an application that can make use of what I have so far.
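For anyone unfamiliar with these operations, here is a minimal sketch of the kind of comparison I mean, assuming an SDR is just a set of active bit indices. The function names and the toy bit patterns are mine for illustration, not cortical.io’s API:

```python
def overlap(sdr_a, sdr_b):
    """Count of active bits the two SDRs share."""
    return len(set(sdr_a) & set(sdr_b))

def similarity(sdr_a, sdr_b):
    """Overlap normalized by the smaller SDR, giving a 0..1 score."""
    return overlap(sdr_a, sdr_b) / min(len(set(sdr_a)), len(set(sdr_b)))

# Toy fingerprints; real word SDRs have thousands of bits at ~2% sparsity.
cat = {3, 17, 42, 99, 150}
dog = {3, 17, 42, 101, 200}
car = {5, 60, 77, 123, 180}

print(overlap(cat, dog))  # 3 shared bits: semantically close
print(overlap(cat, car))  # 0 shared bits: unrelated
```

Shared bits between two fingerprints are what carry the semantics, so a plain set intersection is enough to get a usable similarity signal.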
The idea for this application is a “support chat” style AI, which starts by asking “How can I help you today?”. The user is free to type whatever they want. The application would then do some form of SDR comparison to read the semantics of what the user typed, and use that to determine what the user wants to do from a list of possible actions.
My first thought is to create a tree structure in which the leaf nodes represent goals. The top level might be “Report a bug”, “Install the application”, and “Leave feedback”. The next level under “Install the application” might be an OS selection: “Windows”, “Mac”, and “Linux”, and so on down to the actual goals.
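As a sketch, the tree I have in mind might look something like this. The class, field names, and action strings are hypothetical, just to show the shape:

```python
class Node:
    """One decision point in the goal tree; leaves carry the action to perform."""
    def __init__(self, label, question=None, children=None, action=None):
        self.label = label
        self.question = question        # asked only if semantics can't resolve it
        self.children = children or []  # empty for leaf (goal) nodes
        self.action = action            # e.g. "show_windows_guide" on a leaf

tree = Node("root", question="How can I help you today?", children=[
    Node("Report a bug", action="open_bug_form"),
    Node("Install the application",
         question="Which OS will you be installing the application on?",
         children=[
             Node("Windows", action="show_windows_guide"),
             Node("Mac", action="show_mac_guide"),
             Node("Linux", action="show_linux_guide"),
         ]),
    Node("Leave feedback", action="open_feedback_form"),
])
```

Each inner node is both a branch point and a question; reaching a leaf means the goal is fully determined.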
Each element in the tree structure would have an associated question that the system could ask, for example “Which OS will you be installing the application on?”. The system would first try to answer every question itself based on the semantics of what the user typed, using an overlap threshold; for anything under the threshold, it would ask the user directly. The semantics of what the user types in answer to the question would then be used to answer further questions. Once all questions necessary to reach a goal have been answered, the system will perform that action (show a user guide, etc.).
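A rough sketch of that loop, using a crude stand-in encoder (union of word SDRs) and toy bit patterns. All of the names, the dict layout, and the threshold value are assumptions for illustration, not my actual implementation:

```python
def encode(text, word_sdrs):
    """Crude text SDR: union of the word SDRs (stand-in for real semantic folding)."""
    bits = set()
    for word in text.lower().split():
        bits |= word_sdrs.get(word, set())
    return bits

def resolve(node, text_sdr, word_sdrs, ask, threshold=2):
    """Walk the goal tree, answering each question by SDR overlap when possible."""
    while node.get("children"):
        scored = [(len(text_sdr & child["sdr"]), child)
                  for child in node["children"]]
        score, best = max(scored, key=lambda pair: pair[0])
        if score < threshold:
            # Semantics too weak to decide: ask the user, fold the answer's
            # bits back in, and re-score. (A real version would cap retries.)
            text_sdr = text_sdr | encode(ask(node["question"]), word_sdrs)
            continue
        node = best
    return node["action"]

# Toy word fingerprints and a two-level goal tree.
word_sdrs = {"install": {1, 2, 3}, "windows": {10, 11, 12}, "bug": {20, 21, 22}}
tree = {
    "question": "How can I help you today?",
    "children": [
        {"label": "Report a bug", "sdr": {20, 21, 22}, "action": "open_bug_form"},
        {"label": "Install", "sdr": {1, 2, 3},
         "question": "Which OS will you be installing on?",
         "children": [
             {"label": "Windows", "sdr": {10, 11, 12}, "action": "windows_guide"},
             {"label": "Linux", "sdr": {30, 31, 32}, "action": "linux_guide"},
         ]},
    ],
}

# "install the app" resolves the first level; the OS is under threshold,
# so the system falls back to asking (simulated here with a lambda).
print(resolve(tree, encode("install the app", word_sdrs),
              word_sdrs, ask=lambda q: "windows"))  # -> windows_guide
```

In a real chat loop, `ask` would print the question and read the user’s reply; the key idea is that answered text just unions more bits into the running SDR, so every question is answered the same way.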
Thought I would post here to get some ideas and feedback from the community, and to post my progress on the application. Stay tuned for more info!