15:08

Example Doc for Token Classification of Llama and Dependent/Copied Models

What is the Issue?

Example documentation was missing for token classification with Llama and its dependent/copied models. Without it, developers had no clear reference for how to perform token classification with these models.

What does the PR do?

This PR adds example documentation for token classification with Llama and its dependent/copied models. The covered models are Llama, Mistral, Mixtral, Nemotron, Persimmon, Qwen2, Qwen2Moe, StableLM, StarCoder2, Gemma (Modular), and Gemma2 (Modular).
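
Because these models share the same token-classification head, the usage pattern shown below for Llama carries over directly to the others. As a minimal sketch, assuming Qwen2ForTokenClassification and a placeholder checkpoint name not taken from the PR:

# Hypothetical sketch: the same pattern applied to a dependent model.
# "Qwen/Qwen2-0.5B" is an assumed checkpoint name, not taken from the PR.
from transformers import AutoTokenizer, Qwen2ForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")
model = Qwen2ForTokenClassification.from_pretrained("Qwen/Qwen2-0.5B")

inputs = tokenizer("Hello, my name is Qwen.", return_tensors="pt")
outputs = model(**inputs)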

Why is it Important?

Example documentation shows developers how to use these models for specific tasks such as token classification. This PR makes it easier to get started with token classification using Llama and its dependent/copied models.

Code Snippet

Here is a code snippet that shows the example documentation added in the PR:

# Example of token classification using LlamaForTokenClassification.
# "llama-base" is a placeholder checkpoint name; substitute a Llama
# checkpoint you have access to.
from transformers import LlamaForTokenClassification, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("llama-base")
model = LlamaForTokenClassification.from_pretrained("llama-base")

inputs = tokenizer("Hello, my name is Llama.", return_tensors="pt")
outputs = model(**inputs)  # outputs.logits holds per-token class scores
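
To turn these raw outputs into per-token labels, a common follow-up (not part of the PR's snippet) is to take the argmax over the logits and map the class ids through the model's id2label config; the actual label names depend on the fine-tuned checkpoint:

import torch

# Pick the highest-scoring class for each token
predictions = torch.argmax(outputs.logits, dim=-1)

# Map class ids to label names (the label set depends on the checkpoint)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
labels = [model.config.id2label[p.item()] for p in predictions[0]]
for token, label in zip(tokens, labels):
    print(token, label)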

You can view the full PR here.


Links : Transformers

Tags :

Date : 16th March, Sunday, 2025

Category : Others