
A 16-bit (float16) copy of the weights from PharMolix/BioMedGPT-LM-7B, for easier download, finetuning, and model merging.

## Code

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the original weights, casting them to float16 on load
# and spreading layers across available devices.
model = AutoModelForCausalLM.from_pretrained(
    "PharMolix/BioMedGPT-LM-7B",
    torch_dtype=torch.float16,
    device_map="auto",
)
```
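A model loaded this way can then be written back out as float16 safetensors shards, which is presumably how a 16-bit copy like this one is produced. A minimal sketch, assuming the standard `transformers` `save_pretrained` API; the output directory name is hypothetical:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "PharMolix/BioMedGPT-LM-7B"

# Load in float16, then save the converted weights locally.
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(repo)

# safe_serialization=True writes .safetensors shards instead of pickle files.
model.save_pretrained("BioMedGPT-16bit", safe_serialization=True)
tokenizer.save_pretrained("BioMedGPT-16bit")
```

The resulting directory can be pushed to the Hub as a standalone repository that downstream users load without any dtype conversion.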
Model size: 6.74B params · Tensor type: FP16 (safetensors)
