Bi-conditional generative networks

https://www.researchgate.net/publication/332866344_BI-CONDITIONAL_GENERATIVE_NETWORKS

We outline a method for imprinting the memory and collective experience of an agent, for example which words were previously said, in what order, and by whom in a chatbot system. We are given a training corpus of conversations. A feed-forward neural network takes as input a noise vector together with the current word. At the start of a conversation the noise input is the zero vector, with the same dimensions as the noise vectors we will use, paired with the first word of the conversation. The network outputs another noise vector and the one-hot encoded next word. This pair of outputs is then fed back into the network to predict the word after that. During learning we do not alter the predicted noise vectors; instead, the error function is parameterized by the word vector. This marries the two progressions together: the progression of the noise, and the structure of the conversation, i.e. which word ought to follow which in a sensible sentence.

During operation we input the zero vector and the first word, then feed the pair of outputs back into the network to predict the next word to be said. While a live response is being given to the chatbot, the next word is not predicted, but we still obtain another noise vector to feed back in with the interlocutor's next word, until an EOU (end-of-utterance) token vector is input alongside a noise vector. The system then begins to predict the words to say, as before, using the noise vector produced after the EOU token was met. This yields an expected reply, conditioned on the training corpus but different in its particulars.
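A minimal PyTorch sketch of the training setup follows. The dimensions, the optimizer, and the cross-entropy loss are illustrative assumptions (none are specified above), and we read "the error function is parameterized by the word vector" as applying the loss only to the word prediction while each predicted noise vector passes through unaltered:

```python
# Hypothetical sketch; NOISE_DIM, VOCAB_SIZE, HIDDEN_DIM, EOU_ID and the
# choice of cross-entropy are assumptions, not given in the text above.
import torch
import torch.nn as nn
import torch.nn.functional as F

NOISE_DIM = 32      # assumed noise-vector dimension
VOCAB_SIZE = 5000   # assumed vocabulary size, including an EOU token
HIDDEN_DIM = 256    # assumed hidden width
EOU_ID = 1          # assumed index of the end-of-utterance token

class BiConditionalNet(nn.Module):
    """Feed-forward net: (noise, one-hot word) -> (next noise, next-word logits)."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(NOISE_DIM + VOCAB_SIZE, HIDDEN_DIM),
            nn.ReLU(),
        )
        self.noise_head = nn.Linear(HIDDEN_DIM, NOISE_DIM)
        self.word_head = nn.Linear(HIDDEN_DIM, VOCAB_SIZE)

    def forward(self, noise, word_onehot):
        h = self.body(torch.cat([noise, word_onehot], dim=-1))
        return self.noise_head(h), self.word_head(h)

def train_on_conversation(net, optimizer, word_ids):
    """One pass over one conversation, given as a list of word ids.

    Starts from the zero noise vector paired with the first word; the
    loss is applied only to the word prediction, and each predicted
    noise vector is carried forward unaltered (detached, so no gradient
    flows through the noise progression).
    """
    noise = torch.zeros(1, NOISE_DIM)
    for cur_id, next_id in zip(word_ids[:-1], word_ids[1:]):
        word = F.one_hot(torch.tensor([cur_id]), VOCAB_SIZE).float()
        next_noise, logits = net(noise, word)
        loss = F.cross_entropy(logits, torch.tensor([next_id]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        noise = next_noise.detach()  # fed forward, never altered
```

Training would then iterate train_on_conversation(net, opt, ids) over every conversation in the corpus, e.g. with net = BiConditionalNet() and opt = torch.optim.Adam(net.parameters()).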
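Operation then follows the listen-then-reply loop described above. This continues the same sketch, reusing BiConditionalNet and the constants from the previous block; the greedy argmax choice of each word is an assumption, since the text does not say how a word is drawn from the network's output:

```python
@torch.no_grad()
def listen_then_reply(net, heard_ids, max_len=30):
    """Absorb a live utterance, then generate a reply.

    While the interlocutor speaks, the word predictions are discarded,
    but each step still yields a fresh noise vector to feed back in.
    After the EOU token is input, the network's own predictions are fed
    back to produce the reply.
    """
    noise = torch.zeros(1, NOISE_DIM)
    for wid in heard_ids:
        word = F.one_hot(torch.tensor([wid]), VOCAB_SIZE).float()
        noise, _ = net(noise, word)  # keep the noise, drop the prediction

    wid = EOU_ID  # input the EOU token alongside the current noise vector
    reply = []
    for _ in range(max_len):
        word = F.one_hot(torch.tensor([wid]), VOCAB_SIZE).float()
        noise, logits = net(noise, word)
        wid = int(logits.argmax(dim=-1))  # assumed: greedy decoding
        if wid == EOU_ID:
            break
        reply.append(wid)
    return reply
```

In this reading the noise progression plays the role a recurrent hidden state would otherwise play, except that it is emitted as an output and explicitly fed back in rather than held inside the network.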