# SPECTER 2.0
SPECTER 2.0 is the successor to SPECTER and can generate task-specific embeddings for scientific tasks when paired with adapters.
Given the title and abstract of a scientific paper, or a short textual query, the model generates effective embeddings for use in downstream applications.
## Model Details
### Model Description
SPECTER 2.0 was trained on over 6M triplets of scientific paper citations, which are available here.
It was then trained on all the SciRepEval training tasks, with task-format-specific adapters.
Task Formats trained on:
- Classification
- Regression
- Proximity
- Adhoc Search
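Each task format maps to its own adapter (see the table under Direct Use). A minimal sketch of selecting the right adapter repo id programmatically; the lookup keys are illustrative names, not an official API:

```python
# Map each SPECTER 2.0 task format to its adapter repo on the Hugging Face Hub.
# Repo ids are taken from the Direct Use table; the dict keys are our own naming.
TASK_FORMAT_TO_ADAPTER = {
    "classification": "allenai/specter2_classification",
    "regression": "allenai/specter2_regression",
    "proximity": "allenai/specter2_proximity",
    "adhoc_query": "allenai/specter2_adhoc_query",
}

def adapter_for(task_format: str) -> str:
    """Return the adapter repo id for a task format, raising on unknown formats."""
    try:
        return TASK_FORMAT_TO_ADAPTER[task_format]
    except KeyError:
        raise ValueError(f"Unknown task format: {task_format!r}")
```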
It builds on the work done in *SciRepEval: A Multi-Format Benchmark for Scientific Document Representations*, and we evaluate the trained model on that benchmark as well.
- Developed by: Amanpreet Singh, Mike D’Arcy, Arman Cohan, Doug Downey, Sergey Feldman
- Shared by: Allen AI
- Model type: bert-base-uncased + adapters
- License: Apache 2.0
- Finetuned from model: allenai/scibert.
### Model Sources
- Repository: https://github.com/allenai/SPECTER2_0
- Paper: https://api.semanticscholar.org/CorpusID:254018137
- Demo: Usage
## Uses
### Direct Use
| Model | Type | Name and HF link |
|---|---|---|
| Base | Transformer | [allenai/specter2](https://huggingface.co/allenai/specter2) |
| Classification | Adapter | [allenai/specter2_classification](https://huggingface.co/allenai/specter2_classification) |
| Regression | Adapter | [allenai/specter2_regression](https://huggingface.co/allenai/specter2_regression) |
| Retrieval | Adapter | [allenai/specter2_proximity](https://huggingface.co/allenai/specter2_proximity) |
| Adhoc Query | Adapter | [allenai/specter2_adhoc_query](https://huggingface.co/allenai/specter2_adhoc_query) |
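A sketch of direct use, assuming the `adapters` library alongside `transformers` (the exact loading API may differ across adapter library versions): papers are embedded by concatenating title and abstract with the tokenizer's separator token, then taking the first-token embedding.

```python
def format_input(title: str, abstract: str, sep_token: str = "[SEP]") -> str:
    """Concatenate a paper's title and abstract with the separator token.

    An empty or missing abstract degrades gracefully to title + separator.
    """
    return title + sep_token + (abstract or "")

def embed_papers(papers, model, tokenizer):
    """Embed a batch of {'title': ..., 'abstract': ...} dicts.

    Returns the embedding of the first token for each input.
    """
    texts = [format_input(p["title"], p.get("abstract", ""), tokenizer.sep_token)
             for p in papers]
    inputs = tokenizer(texts, padding=True, truncation=True,
                       return_tensors="pt", max_length=512)
    output = model(**inputs)
    return output.last_hidden_state[:, 0, :]

if __name__ == "__main__":
    # Requires network access to the Hugging Face Hub; the loading calls
    # follow the `adapters` library and may need adjusting for your version.
    from transformers import AutoTokenizer
    from adapters import AutoAdapterModel

    tokenizer = AutoTokenizer.from_pretrained("allenai/specter2")
    model = AutoAdapterModel.from_pretrained("allenai/specter2")
    # Load the proximity adapter for retrieval-style embeddings.
    model.load_adapter("allenai/specter2_proximity", source="hf",
                       load_as="proximity", set_active=True)
    papers = [{"title": "SPECTER 2.0", "abstract": "Task-specific scientific embeddings."}]
    print(embed_papers(papers, model, tokenizer).shape)
```

Swap in a different adapter from the table above (e.g. `allenai/specter2_classification`) to produce embeddings tuned for that task format.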