2024-07-28 14:24:14
That is a picture of an autoencoder, a different kind of machine learning model, not a large language model. Autoencoders work by compressing their input, and are more analogous to the genome than LLMs are. [Link] NEEDS_MORE_RATINGS(1-0-0) Author
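The compression the note refers to can be sketched with a toy linear autoencoder. This is an illustrative example, not from the note itself; the layer sizes and random weights are assumptions chosen only to show the bottleneck: the input must pass through a code with fewer dimensions than the input, which is what forces compression.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumed): 8-dimensional input, 2-dimensional code.
input_dim, latent_dim = 8, 2
W_enc = rng.normal(size=(input_dim, latent_dim))  # encoder weights
W_dec = rng.normal(size=(latent_dim, input_dim))  # decoder weights

x = rng.normal(size=(1, input_dim))  # one example input

code = x @ W_enc    # encoder: 8 dims -> 2 dims (the compressed representation)
x_hat = code @ W_dec  # decoder: 2 dims -> 8 dims (the reconstruction)

print(code.shape)   # (1, 2)
print(x_hat.shape)  # (1, 8)
```

Training would then adjust `W_enc` and `W_dec` to minimize reconstruction error, so the 2-dimensional code retains as much of the input as possible.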
2024-07-28 16:50:31
Modelling the computational complexity stored in the human genome would, at minimum, involve accounting for molecular information and thermodynamics. There is no evidence that the subcomponents of a large language model, such as the attention mechanism, are stored in the genome. [Link] NEEDS_MORE_RATINGS(0-0-0) Author