  1. This paper presents a scalable neural-network (NN) inference accelerator in 16nm, based on an array of programmable cores employing mixed-signal In-Memory Computing (IMC), digital …

  2. Fast and robust analog in-memory deep neural network training

    Aug 20, 2024 · Here, we propose two improved algorithms for in-memory training that retain the same fast runtime complexity while resolving the requirement of a precise zero point.

  3. CIMAT: A Compute-In-Memory Architecture for On-chip Training

    In this article, we propose CIMAT, a CIM Architecture for Training. At the bitcell level, we design two versions of 7T and 8T transpose SRAM to implement bi-directional vector-to-matrix …

  4. Compute in-Memory with Non-Volatile Elements for Neural Networks

    Dec 29, 2022 · Conceptually, cross-bar arrays emulate the synaptic connections between pre-neurons (array input) and post-neurons (array output) in a neural network; input stimuli … (a minimal sketch of this crossbar matrix-vector operation follows this list).

  5. Compute-in-memory designs for deep neural network and …

    Compute-In-Memory (CIM) designs, performing analog DNN computations within a memory array along with peripheral data converter circuits, are being explored to mitigate this ‘Memory Wall’ …

  6. CiMLoop: A Flexible, Accurate, and Fast Compute-In-Memory

    May 12, 2024 · Compute-In-Memory (CiM) is a promising solution to accelerate Deep Neural Networks (DNNs) as it can avoid energy-intensive DNN weight movement and use memory …

  7. An All-digital Compute-in-memory FPGA Architecture for Deep

    In this article, we propose an all-digital Compute-in-memory FPGA architecture for deep learning acceleration. Furthermore, we present a bit-serial computing circuit of the Digital CIM core for … (a bit-serial sketch follows this list).

  8. A Hybrid-Domain Floating-Point Compute-in-Memory

    Feb 11, 2025 · Compute-in-memory (CIM) has shown significant potential in efficiently accelerating deep neural networks (DNNs) at the edge, particularly in speeding up quantized …

  9. This paper presents an MRAM-based deep in-memory architecture (MRAM-DIMA) to efficiently implement multi-bit matrix-vector multiplication for deep neural networks using a …

  10. A large-scale in-memory computing for deep neural network

    Nov 1, 2019 · Next, we show that the quantized ResNet can be mapped to resistive random access memory (ReRAM) with an in-memory computing architecture, which can achieve significantly …
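
Several of the results above (items 4, 9, and 10) rest on the same underlying operation: a crossbar of non-volatile conductances computes a matrix-vector product in the analog domain, with activations applied as row voltages and per-column currents read out as dot products. The following is a minimal numerical sketch of that behavior, not drawn from any of the cited papers; the differential conductance mapping, bit widths, and g_max value are illustrative assumptions.

```python
import numpy as np

def crossbar_mvm(weights, inputs, g_bits=4, adc_bits=8, g_max=1e-6):
    """Idealized analog crossbar matrix-vector multiply (software sketch).

    weights : (rows, cols) signed weight matrix of one DNN layer
    inputs  : (rows,) input activations, applied as row voltages
    g_bits  : conductance (weight) quantization -- assumed value
    adc_bits: column ADC resolution -- assumed value
    g_max   : maximum device conductance in siemens -- assumed value
    """
    levels = 2 ** g_bits - 1
    w_max = float(np.max(np.abs(weights))) or 1.0

    # Map each signed weight onto a differential pair of quantized
    # conductances (G+ carries positive weights, G- carries negative ones).
    g_pos = np.round(np.clip(weights, 0, None) / w_max * levels) / levels * g_max
    g_neg = np.round(np.clip(-weights, 0, None) / w_max * levels) / levels * g_max

    # Ohm's law per cell plus Kirchhoff current summation per column:
    # each column current is the dot product of the row voltages with
    # that column's conductances, so the MVM happens inside the array.
    i_col = inputs @ g_pos - inputs @ g_neg

    # Column ADCs quantize the analog currents back into digital codes.
    full_scale = float(np.max(np.abs(i_col))) or 1.0
    codes = np.round(i_col / full_scale * (2 ** (adc_bits - 1) - 1))

    # Rescale the codes to the weight domain so the result is directly
    # comparable with the exact digital MVM.
    return codes / (2 ** (adc_bits - 1) - 1) * full_scale / g_max * w_max

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 32))
x = rng.normal(size=64)
print("max abs error vs. exact MVM:", np.max(np.abs(crossbar_mvm(W, x) - x @ W)))
```

Real CIM macros add non-idealities (IR drop, device variation, limited ADC dynamic range) that this sketch deliberately ignores.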
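
Item 7 refers to a bit-serial computing circuit inside a digital CIM core. Independently of that paper's specific design, the usual bit-serial scheme streams the activations one bit-plane per cycle, lets the array compute a 1-bit times multi-bit partial product, and shift-accumulates the partial sums. Below is a rough software model under assumed bit widths; the in_bits value and integer ranges are illustrative, not taken from the paper.

```python
import numpy as np

def bit_serial_mvm(weights, inputs, in_bits=8):
    """Software model of a bit-serial digital CIM matrix-vector multiply.

    weights : (rows, cols) signed integer weights stored in the CIM array
    inputs  : (rows,) unsigned integer activations, streamed LSB-first
    in_bits : activation bit width -- an assumed value
    """
    acc = np.zeros(weights.shape[1], dtype=np.int64)
    for b in range(in_bits):
        # One bit-plane of every activation is broadcast to the array
        # in a single cycle.
        bit_plane = (inputs >> b) & 1                 # shape (rows,)
        # The array produces a 1-bit x multi-bit partial MVM ...
        partial = bit_plane @ weights                 # shape (cols,)
        # ... which is weighted by 2**b and accumulated (shift-add).
        acc += partial.astype(np.int64) * (1 << b)
    return acc

rng = np.random.default_rng(1)
W = rng.integers(-8, 8, size=(16, 4))     # 4-bit signed weights (assumed)
x = rng.integers(0, 256, size=16)         # 8-bit unsigned activations (assumed)
print(np.array_equal(bit_serial_mvm(W, x), x @ W))   # matches the exact product
```

The cycle count scales with in_bits, which is the usual bit-serial trade-off: narrower per-cycle hardware in exchange for more cycles per matrix-vector product.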
