Library · Stella Biderman

trlX

A library for distributed and performant training of language models with Reinforcement Learning from Human Feedback (RLHF), created by the CarperAI team.

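A rough sketch of the intended workflow, assuming the high-level trlx.train entry point from the project README (details vary across versions); the reward function here is a toy stand-in for a learned human-preference reward model:

```python
import trlx

# Score completions: here, simply reward occurrences of a target word.
# In real RLHF this would be a reward model trained on human preferences.
def reward_fn(samples, **kwargs):
    return [float(sample.count("helpful")) for sample in samples]

trainer = trlx.train(
    "gpt2",               # base model to fine-tune online with RL
    reward_fn=reward_fn,  # maps generated samples -> scalar rewards
    prompts=["Explain RLHF:", "Summarize this article:"],
)
```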
Dataset · Stella Biderman

Proof-Pile-2

A 55-billion-token dataset of mathematical and scientific documents, created for training the Llemma models.

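If the dataset is mirrored on the Hugging Face Hub, loading a slice might look like the sketch below; the repository id, subset name, and field name are assumptions for illustration, not confirmed identifiers:

```python
from datasets import load_dataset

# Hypothetical loading sketch. "EleutherAI/proof-pile-2", the "arxiv" subset,
# and the "text" field are assumed names; check the dataset card for the
# actual identifiers and available subsets.
ds = load_dataset(
    "EleutherAI/proof-pile-2",
    "arxiv",
    split="train",
    streaming=True,  # tens of billions of tokens: stream rather than download
)
print(next(iter(ds))["text"][:200])
```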
Model · Stella Biderman

Pythia

A suite of 16 models with 154 partially trained checkpoints designed to enable controlled scientific research on openly accessible and transparently trained large language models.

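The checkpoints are what enable the controlled experiments: every model in the suite can be loaded at a specific training step. A minimal sketch with Hugging Face transformers, assuming the EleutherAI/pythia-* Hub ids and the step-numbered revision branches described in the model cards:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load a Pythia model at an intermediate training checkpoint. The "step<N>"
# revision naming follows the Pythia model cards; the exact steps available
# vary by model.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/pythia-70m",
    revision="step3000",  # partially trained checkpoint
)
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-70m")
```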
Library · Stella Biderman

tuned-lens

A library implementing the Tuned Lens, along with other tools for extracting, manipulating, and studying the learned representations of transformers across layers.

https://github.com/norabelrose/tuned-lens

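The underlying idea is to decode each layer's hidden state into vocabulary space and watch the model's prediction form across depth. The sketch below implements the simpler "logit lens" baseline with plain transformers calls; the Tuned Lens itself additionally trains an affine translator per layer, and this is not the tuned-lens library's API:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "Logit lens" baseline: project each layer's hidden state through the
# model's final layer norm and unembedding. Module paths like
# `gpt_neox.final_layer_norm` are specific to the GPT-NeoX model family.
name = "EleutherAI/pythia-70m"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
unembed = model.get_output_embeddings()  # final vocabulary projection

inputs = tok("The capital of France is", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Decode the last position's hidden state at every layer.
for layer, h in enumerate(out.hidden_states):
    logits = unembed(model.gpt_neox.final_layer_norm(h[0, -1]))
    print(layer, tok.decode([logits.argmax().item()]))
```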
Model · Stella Biderman

Polyglot-Ko

A series of Korean autoregressive language models made by the EleutherAI polyglot team. To date we have trained and released 1.3B, 3.8B, and 5.8B parameter models.

Model, Library · Stella Biderman

RWKV

RWKV is an RNN with transformer-level performance at some language modeling tasks. Unlike other RNNs, it can be scaled to tens of billions of parameters efficiently.

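What makes it an RNN: at inference time each token updates a small constant-size state rather than attending over the whole history. A per-channel sketch of the WKV recurrence in the RWKV-4 style, omitting the numerical-stability tricks real implementations use (w, u, k, v are the learned decay, bonus, key, and value):

```python
import math

# State (a, b) is a running exp-weighted numerator/denominator, so memory
# is O(1) in sequence length, unlike attention's growing key/value cache.
def wkv_step(a, b, k_t, v_t, w, u):
    """One recurrence step for a single channel."""
    out = (a + math.exp(u + k_t) * v_t) / (b + math.exp(u + k_t))
    a = math.exp(-w) * a + math.exp(k_t) * v_t  # decay old state, add new
    b = math.exp(-w) * b + math.exp(k_t)
    return out, a, b

a = b = 0.0
for k_t, v_t in [(0.1, 1.0), (0.3, -0.5), (-0.2, 2.0)]:
    y, a, b = wkv_step(a, b, k_t, v_t, w=0.5, u=0.2)
    print(round(y, 4))
```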
Model, Library · Stella Biderman

OpenFold

A trainable, memory-efficient, and GPU-friendly PyTorch reproduction of AlphaFold2.

Dataset · Stella Biderman

Simulacra Aesthetic Captions

Simulacra Aesthetic Captions is a dataset of over 238,000 synthetic images generated with AI models such as CompVis latent GLIDE and Stable Diffusion from over forty thousand user-submitted prompts. Users rated the images on their aesthetic value from 1 to 10, creating caption, image, and rating triplets. In addition, each user agreed to release all of their work with the bot (prompts, outputs, and ratings) into the public domain under the CC0 1.0 Universal Public Domain Dedication. The result is a high-quality, royalty-free dataset with over 176,000 ratings.

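A hypothetical sketch of the triplet structure (field names are illustrative, not the dataset's actual schema or distribution format):

```python
from dataclasses import dataclass

# Illustrative record shape only; consult the dataset repository for the
# real schema.
@dataclass
class AestheticCaption:
    prompt: str      # user-submitted text prompt
    image_path: str  # synthetic image generated from the prompt
    rating: int      # aesthetic score from 1 (worst) to 10 (best)

example = AestheticCaption(
    prompt="a watercolor lighthouse at dusk",
    image_path="images/000123.png",
    rating=7,
)
assert 1 <= example.rating <= 10
```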
Model · Stella Biderman

GPT-NeoX-20B

An open source English autoregressive language model trained on the Pile. At the time of its release, it was the largest publicly available language model in the world.

Library · Stella Biderman

GPT-NeoX

A library for efficiently training large language models with tens of billions of parameters in a multi-machine distributed context. This library is currently maintained by EleutherAI.

Model · Guest User

CARP

A CLIP-like model trained on (text, critique) pairs with the goal of learning the relationships between passages of text and natural language feedback on those passages.

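The training recipe is CLIP's contrastive objective applied to (passage, critique) pairs: matched pairs are pulled together in a shared embedding space and mismatched pairs pushed apart. A minimal sketch of that symmetric InfoNCE loss; the encoders and embedding size are placeholders, not CARP's actual architecture:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(passage_emb, critique_emb, temperature=0.07):
    """CLIP-style symmetric contrastive loss over a batch of paired embeddings."""
    p = F.normalize(passage_emb, dim=-1)  # cosine similarity via normalization
    c = F.normalize(critique_emb, dim=-1)
    logits = p @ c.T / temperature        # (batch, batch) similarity matrix
    labels = torch.arange(len(p))         # i-th passage matches i-th critique
    # Cross-entropy both ways: passage -> critique and critique -> passage.
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2

# Stand-in embeddings; in CARP these come from the text and critique encoders.
loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```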
Model · Stella Biderman

GPT-J

A six billion parameter open source English autoregressive language model trained on the Pile. At the time of its release, it was the largest publicly available GPT-3-style language model in the world.

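Sampling from the released weights with Hugging Face transformers (the EleutherAI/gpt-j-6b Hub id is standard; loading in float16 roughly halves memory versus float32, to about 12 GB):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6b")
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6b",
    torch_dtype=torch.float16,  # ~12 GB instead of ~24 GB; use a GPU in practice
)

inputs = tok("EleutherAI is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tok.decode(out[0], skip_special_tokens=True))
```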
Library · Stella Biderman

LM Eval Harness

Our library for reproducible and transparent evaluation of LLMs.

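A sketch of programmatic use, assuming the simple_evaluate entry point from recent releases of the harness (argument names may differ across versions):

```python
import lm_eval

# Evaluate a causal LM on benchmark tasks; `simple_evaluate` and these
# argument names follow recent releases of lm-evaluation-harness.
results = lm_eval.simple_evaluate(
    model="hf",                                   # Hugging Face backend
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["lambada_openai", "hellaswag"],
    batch_size=8,
)
print(results["results"])
```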