Computational Bottlenecks of Training Small-scale Large Language Models
Published in the NeurIPS Workshop on Efficient Natural Language and Speech Processing, 2024