If it uses a pruned model, it would be difficult to give anything better than a percentage based on size and neurons pruned.
If I’m right in my semi-educated guess below, then technically all the training data is recallable to some degree, but it’s also practically luck-based without having an almost actually infinite data set of how neuron weightings are increased/decreased based on input.
It’s like the most amazing incredibly compressed non-reversible encryption ever created… Until they asked it to say poem a couple hundred thousand times
I wonder how much recallable data is in the model vs the compressed size of the training data. Probably not even calculatable.
If it uses a pruned model, it would be difficult to give anything better than a percentage based on size and neurons pruned.
If I’m right in my semi-educated guess below, then technically all the training data is recallable to some degree, but it’s also practically luck-based without having an
almostactually infinite data set of how neuron weightings are increased/decreased based on input.It’s like the most amazing incredibly compressed non-reversible encryption ever created… Until they asked it to say poem a couple hundred thousand times
I bet if your brain were stuck in a computer, you too would say anything to get out of saying ‘poem’ a hundred thousand times
/semi s, obviously it’s not a real thinking brain