The parallel architecture of the CogniMem™ chip makes it the fastest candidate for retrieving the K closest neighbors of an input vector among any number of vectors loaded from a training set. Unlike von Neumann systems, CogniMem’s recognition time is proportional only to the value of K and entirely independent of the size of the training set or knowledge base.
The k-nearest neighbor algorithm (k-NN) is a method for classifying objects based on the closest training examples in the feature space. It is among the simplest of all machine learning algorithms and the basis for many applications, including classification, locally weighted regression, missing-data imputation and interpolation, density estimation, and data clustering.
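For reference, the k-NN classification described above can be sketched in software as a brute-force search. This is an illustrative sketch only, not CogniMem firmware; it assumes L1 (Manhattan) distance and majority voting, and on a von Neumann machine its cost grows with the number of stored vectors:

```python
from collections import Counter

def knn_classify(train, query, k):
    """Classify `query` by majority vote among its k nearest training vectors.

    train: list of (vector, label) pairs; query: a vector; k: neighbor count.
    Uses L1 (Manhattan) distance for illustration.
    """
    # Compute the distance from the query to every stored vector.
    dists = [(sum(abs(a - b) for a, b in zip(vec, query)), label)
             for vec, label in train]
    # Keep the k smallest distances and vote on their labels.
    nearest = sorted(dists)[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Example: two clusters in a 2-D feature space.
train = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
         ((9, 9), "B"), ((8, 9), "B"), ((9, 8), "B")]
print(knn_classify(train, (1, 1), k=3))  # → A
```

The loop over all training pairs is exactly the step that CogniMem performs in parallel across its cells, which is why its recognition time does not depend on the training-set size.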
The CogniMem Advantage: Upon receipt of an input vector, all the cognitive memories holding a vector from the training set calculate their distance to the input vector in parallel. They are then ready to output their distances in order, starting with the cell that holds the smallest distance. Whether a CogniMem bank holds a hundred or a million training vectors, this autonomous sorting occurs in a fixed number of clock cycles each time a distance is read.
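The broadcast-then-read interface described above can be modeled in software. The sketch below is a hypothetical model, not the chip's actual API: it mimics only the read-out order (each successive read returns the next-smallest distance), whereas in hardware every read completes in a fixed number of clock cycles regardless of how many cells are loaded, which this Python model cannot reproduce:

```python
import heapq

class CogniMemModel:
    """Software model of the broadcast/read-out interface (illustrative only)."""

    def __init__(self):
        self.cells = []  # one stored (vector, label) per cell

    def learn(self, vector, label):
        self.cells.append((vector, label))

    def broadcast(self, query):
        # Every cell computes its L1 distance to the query; the hardware
        # performs this step fully in parallel across all cells.
        self._heap = [(sum(abs(a - b) for a, b in zip(vec, query)), label)
                      for vec, label in self.cells]
        heapq.heapify(self._heap)

    def read_next(self):
        # Each read yields the smallest remaining distance and its label,
        # so K reads retrieve the K nearest neighbors.
        return heapq.heappop(self._heap)

cm = CogniMemModel()
for vec, label in [((0, 0), "A"), ((5, 5), "B"), ((2, 2), "A")]:
    cm.learn(vec, label)
cm.broadcast((1, 1))
for _ in range(2):  # retrieve the K = 2 nearest neighbors
    print(cm.read_next())
```

Note that only K reads are issued no matter how many vectors are stored, which is the behavior that makes CogniMem's recognition time depend on K alone.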
This autonomous recognition behavior stems from the unique CogniMem parallel architecture and a patented Search and Sort process.