  2. Sparse Autoencoders in Deep Learning - GeeksforGeeks

    Apr 8, 2025 · This is an implementation that shows how to construct a sparse autoencoder with TensorFlow and Keras in order to learn useful representations of the MNIST dataset. The …
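The tutorial above builds its model in TensorFlow/Keras; as a dependency-light illustration of the same construction, here is a minimal NumPy sketch of a sparse autoencoder trained with an L1 activity penalty. The layer sizes, penalty weight `lam`, and learning rate are illustrative choices, not values taken from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MNIST: 200 samples of 16-dimensional inputs in [0, 1].
X = rng.random((200, 16))

n_in, n_hidden = 16, 32   # overcomplete hidden layer (assumed sizes)
lam = 1e-3                # L1 sparsity weight (illustrative)
lr = 0.1                  # learning rate (illustrative)

W_enc = rng.normal(0, 0.1, (n_in, n_hidden))
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(0, 0.1, (n_hidden, n_in))
b_dec = np.zeros(n_in)

def forward(X):
    a = np.maximum(X @ W_enc + b_enc, 0.0)  # ReLU hidden activations
    X_hat = a @ W_dec + b_dec               # linear reconstruction
    return a, X_hat

def loss(X, a, X_hat):
    recon = np.mean((X_hat - X) ** 2)       # reconstruction error
    sparse = lam * np.mean(np.abs(a))       # L1 penalty on activations
    return recon + sparse

a, X_hat = forward(X)
loss_before = loss(X, a, X_hat)

n = X.shape[0]
for _ in range(200):
    a, X_hat = forward(X)
    # Gradient of the mean squared reconstruction error.
    dX_hat = 2.0 * (X_hat - X) / (n * n_in)
    dW_dec = a.T @ dX_hat
    db_dec = dX_hat.sum(axis=0)
    da = dX_hat @ W_dec.T
    da += lam * np.sign(a) / (n * n_hidden)  # L1 penalty gradient
    da *= (a > 0)                            # ReLU gradient mask
    dW_enc = X.T @ da
    db_enc = da.sum(axis=0)
    W_enc -= lr * dW_enc; b_enc -= lr * db_enc
    W_dec -= lr * dW_dec; b_dec -= lr * db_dec

a, X_hat = forward(X)
loss_after = loss(X, a, X_hat)
```

The sparsity term is what distinguishes this from a plain autoencoder: with an overcomplete hidden layer and no penalty, the network could learn a trivial copy of its input.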

  3. Scaling and evaluating sparse autoencoders - arXiv.org

    We systematically study the scaling laws with respect to sparsity, autoencoder size, and language model size. To demonstrate that our methodology can scale reliably, we train a 16 million …

  4. sparse_autoencoders.ipynb - Colab - Google Colab

In this notebook, we will explore one of the cutting-edge approaches to interpreting superposition: sparse autoencoders (SAEs). SAEs are a type of neural network used in unsupervised learning...

  5. These notes describe the sparse autoencoder learning algorithm, which is one approach to automatically learn features from unlabeled data.

  6. openai/sparse_autoencoder - GitHub

    sparse autoencoders trained on the GPT2-small model's activations. See sae-viewer to see the visualizer code, hosted publicly here. See model.py for details on the autoencoder model …

  7. Sparse Autoencoder Explained - Papers With Code

A Sparse Autoencoder is a type of autoencoder that employs sparsity to achieve an information bottleneck. Specifically, the loss function is constructed so that activations are penalized within …
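The penalized-activation loss described above is a reconstruction term plus a sparsity term. The NumPy sketch below shows two common choices of penalty: an L1 term on the activations, and a KL-divergence term pushing each hidden unit's mean activation toward a target rate. The hyperparameters `lam` and `rho` and the sample values are illustrative, not taken from the linked page.

```python
import numpy as np

def sae_loss(x, x_hat, acts, lam=1e-3, penalty="l1", rho=0.05):
    """Reconstruction error plus a sparsity penalty on hidden activations.

    lam (penalty weight) and rho (target activation rate) are
    illustrative hyperparameters, not values from any specific paper.
    """
    recon = np.mean((x - x_hat) ** 2)
    if penalty == "l1":
        # Penalize the mean absolute activation.
        sparse = np.mean(np.abs(acts))
    else:
        # KL divergence between the target rate rho and each unit's mean
        # activation (activations assumed in (0, 1), e.g. sigmoid outputs).
        rho_hat = np.clip(acts.mean(axis=0), 1e-8, 1 - 1e-8)
        sparse = np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    return recon + lam * sparse

# Tiny made-up batch: 2 samples, 2 input dims, 3 hidden units.
x = np.array([[1.0, 0.0], [0.0, 1.0]])
x_hat = np.array([[0.9, 0.1], [0.1, 0.9]])
acts = np.array([[0.8, 0.0, 0.0], [0.0, 0.9, 0.0]])

l1_loss = sae_loss(x, x_hat, acts, penalty="l1")
kl_loss = sae_loss(x, x_hat, acts, penalty="kl")
```

Either penalty drives most activations toward zero, which is what creates the information bottleneck even when the hidden layer is wider than the input.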

  8. Figure 1. Internal structure of a sparse autoencoder

    In this paper, we develop an Artificial Neural Network (ANN) based softmax classifier mated to ANN sparse autoencoders to classify human hand and lower arm movements in healthy …

  9. What happens in Sparse Autoencoder | by Syoya Zhou - Medium

    Dec 4, 2018 · Autoencoders are an important part of unsupervised learning models in the development of deep learning. While autoencoders aim to compress representations and …

  10. Recent work has shown that sparse autoencoders (SAEs) are able to effectively discover human-interpretable features in language models, at scales ranging from toy models to state-of-the-art …
