Apheris Model Registry: BERT tiny

BERT tiny is an open-source model for natural language processing. It is designed to be lightweight and fast while still performing well on a variety of NLP tasks.

These model weights are based on the Hugging Face implementation available in their Model Hub. Users familiar with Hugging Face will find this model easy to use and modify.
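For reference, the checkpoint can be loaded directly with the standard `transformers` API. The sketch below is illustrative only; the Hub identifier `prajjwal1/bert-tiny` is an assumption and may differ from the checkpoint this registry entry actually uses.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed Hub identifier; substitute the checkpoint your registry entry uses.
model_name = "prajjwal1/bert-tiny"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Run a single example through the model to confirm everything is wired up.
inputs = tokenizer("A short review to classify.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2])
```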

Federating BERT tiny on Apheris

This implementation of BERT tiny uses the Hugging Face API. To federate it, we leverage the NVIDIA FLARE federation engine together with the Apheris platform. For instructions on how to run the model in a centralized fashion, here is a good starting point.
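As a rough sketch of that centralized workflow (this is not the Apheris implementation itself; the checkpoint name, dataset slicing, and hyperparameters are assumptions), fine-tuning BERT tiny on IMDB with the Hugging Face `Trainer` could look like this:

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "prajjwal1/bert-tiny"  # assumed Hub checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# IMDB is also the dataset the federated implementation hard-codes (see "Datasets").
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bert-tiny-imdb",
        num_train_epochs=1,
        per_device_train_batch_size=32,
    ),
    # A small subset keeps the example quick; drop .select() for a full run.
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()
```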

Pre-Processing

The federated implementation is similar to the centralized version. For security reasons, federation clients are not allowed to access Hugging Face, so the tokenizer and model architecture are built into the runtime image. This allows them to be loaded locally without unnecessary egress.
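In code, this means the tokenizer and model are loaded from a directory baked into the image rather than from the Hub. A minimal sketch, assuming a hypothetical `/opt/models/bert-tiny` path (the real location inside the runtime image may differ):

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Hypothetical path; the actual location of the bundled assets inside the
# Apheris runtime image may differ.
LOCAL_MODEL_DIR = "/opt/models/bert-tiny"

# local_files_only=True guarantees no call to the Hugging Face Hub is made:
# all assets must already be present on disk inside the Gateway.
tokenizer = AutoTokenizer.from_pretrained(LOCAL_MODEL_DIR, local_files_only=True)
model = AutoModelForSequenceClassification.from_pretrained(
    LOCAL_MODEL_DIR, num_labels=2, local_files_only=True
)
```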

Training

Once the data has been pre-processed, the model is trained on every Gateway just as it would be in a centralized setting. The differences between the weights before and after local training are then centralized for aggregation.
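Conceptually, each Gateway computes a per-tensor difference between its weights after and before local training, and these diffs are what gets aggregated. A hedged illustration of that idea follows; the actual exchange is handled by NVIDIA FLARE, and `local_train_fn` is a hypothetical stand-in for the local training step.

```python
def train_and_diff(model, local_train_fn):
    """Run one round of local training and return the per-tensor weight
    difference (after - before) that would be sent for aggregation."""
    before = {name: t.detach().clone() for name, t in model.state_dict().items()}
    local_train_fn(model)  # hypothetical local training on this Gateway's data
    after = model.state_dict()
    return {name: after[name] - before[name] for name in after}
```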

Once the requested number of rounds has completed, you can download the resulting weights from the Orchestrator using the CLI.

Datasets

For simplicity, we hard-code the IMDB dataset so the model runs without a dataset having to be specified. To train on your own data, modify the get_dataset method in utils.py to point to the desired dataset, as sketched below.
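A hedged sketch of what such a modification could look like: the default IMDB behaviour matches what the implementation hard-codes, while the local CSV paths and file format in the replacement are assumptions to be adapted to your dataset.

```python
from datasets import load_dataset

def get_dataset():
    # Default behaviour: the hard-coded IMDB dataset.
    return load_dataset("imdb")

# Hypothetical replacement pointing at data already available on the Gateway;
# the paths and file format are assumptions.
def get_dataset_local_csv():
    return load_dataset(
        "csv",
        data_files={"train": "/data/train.csv", "test": "/data/test.csv"},
    )
```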

Parameters for BERT tiny

This implementation has no configurable parameters. The intent is to make it easy to grasp, run, and modify in a federated setting.