PortedFastBertTokenizer 0.4.6-beta

This is a prerelease version of PortedFastBertTokenizer.
.NET CLI

dotnet add package PortedFastBertTokenizer --version 0.4.6-beta

Package Manager

NuGet\Install-Package PortedFastBertTokenizer -Version 0.4.6-beta

This command is intended to be used within the Package Manager Console in Visual Studio, as it uses the NuGet module's version of Install-Package.

PackageReference

<PackageReference Include="PortedFastBertTokenizer" Version="0.4.6-beta" />

For projects that support PackageReference, copy this XML node into the project file to reference the package.

Paket CLI

paket add PortedFastBertTokenizer --version 0.4.6-beta

Script & Interactive

#r "nuget: PortedFastBertTokenizer, 0.4.6-beta"

The #r directive can be used in F# Interactive and Polyglot Notebooks. Copy this into the interactive tool or into the source code of the script to reference the package.

Cake

// Install PortedFastBertTokenizer as a Cake Addin
#addin nuget:?package=PortedFastBertTokenizer&version=0.4.6-beta&prerelease

// Install PortedFastBertTokenizer as a Cake Tool
#tool nuget:?package=PortedFastBertTokenizer&version=0.4.6-beta&prerelease

<p align="center"> <a href="https://www.nuget.org/packages/FastBertTokenizer/"> <img alt="FastBertTokenizer logo" src="logo.svg" width="100" /> </a> </p>

FastBertTokenizer


A fast and memory-efficient library for WordPiece tokenization as it is used by BERT. Tokenization results are tested against the outputs of Hugging Face Transformers' AutoTokenizer.

It serves similar needs as, and was initially inspired by, BERTTokenizers; thanks for the great work.

Features

Getting started

using FastBertTokenizer;

var tok = new BertTokenizer();
var maxTokensForModel = 512;

// vocab.txt from https://huggingface.co/BAAI/bge-small-en/blob/main/vocab.txt;
// the second argument lowercases the input, matching uncased models.
await tok.LoadVocabularyAsync("vocab.txt", true);

var text = File.ReadAllText("TextFile.txt");
var (inputIds, attentionMask, tokenTypeIds) = tok.Tokenize(text, maxTokensForModel);
Console.WriteLine(string.Join(", ", inputIds.ToArray().Select(x => x.ToString())));

Comparison of Tokenization Results to Hugging Face Transformers' AutoTokenizer

For correctness verification, about 10,000 articles of Simple English Wikipedia were tokenized using FastBertTokenizer and Hugging Face's tokenizer, using the BAAI bge vocab.txt file. The tokenization results were exactly the same, apart from these two cases:

  • Letter (id 6309) contains Assamese characters. Many of them are not represented in the vocabulary used. Hugging Face's tokenizer skips exactly one of these characters (emitting no [UNK] token for it) where FastBertTokenizer emits one.
  • Avignon (id 30153) has Rhône as the last word before hitting the 512-token-id limit. If a word cannot be found in the vocabulary directly, FastBertTokenizer tries to tokenize prefixes of the word first, while Hugging Face starts directly with a diacritic-free version of the word (see the sketch after this list). Thus, FastBertTokenizer's result ends with the token id for r, while Hugging Face (correctly) emits rhone. This edge case is only relevant
    1. for the last word, after which the tokenized output is cut off, and
    2. if this last word contains diacritics.
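
The following minimal sketch illustrates why the order of those fallbacks matters. It is not FastBertTokenizer's actual implementation, and the tiny vocabulary is made up; it only shows greedy longest-prefix WordPiece matching with and without stripping diacritics first:

using System.Globalization;

static List<string> WordPiece(string word, HashSet<string> vocab)
{
    var tokens = new List<string>();
    var start = 0;
    while (start < word.Length)
    {
        string? match = null;
        // Greedily take the longest prefix of the remaining characters
        // that exists in the vocabulary ("##" marks word-internal pieces).
        for (var end = word.Length; end > start; end--)
        {
            var candidate = (start > 0 ? "##" : string.Empty) + word[start..end];
            if (vocab.Contains(candidate)) { match = candidate; start = end; break; }
        }
        if (match is null) { tokens.Add("[UNK]"); break; }
        tokens.Add(match);
    }
    return tokens;
}

// Decompose to NFD and drop combining marks: "rhône" -> "rhone".
static string StripDiacritics(string s) => new string(s.Normalize(NormalizationForm.FormD)
    .Where(c => CharUnicodeInfo.GetUnicodeCategory(c) != UnicodeCategory.NonSpacingMark)
    .ToArray());

var vocab = new HashSet<string> { "r", "rhone" };

// Prefix matching on the raw word finds "r" first; if the token budget is
// exhausted at this point, the token for "r" is the last one emitted.
Console.WriteLine(string.Join(" ", WordPiece("rhône", vocab)));                  // r [UNK]

// Stripping diacritics first matches the whole word.
Console.WriteLine(string.Join(" ", WordPiece(StripDiacritics("rhône"), vocab))); // rhone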

These minor differences are probably irrelevant in most real-world use cases. All of the other more than 10,000 tested articles, including ones containing Chinese and Korean characters as well as much less common scripts and right-to-left text, were tokenized exactly as by Hugging Face's tokenizer.

Comparison to BERTTokenizers

Note that while BERTTokenizers handles token type ids incorrectly, it does support input of two pieces of text that are tokenized with a separator in between. FastBertTokenizer currently does not support this.
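
For reference, a two-segment BERT input is conventionally laid out as [CLS] A [SEP] B [SEP], with token type id 0 for the first segment (including [CLS] and the first [SEP]) and 1 for the second. The token ids below are illustrative placeholders, not taken from a real vocabulary:

// Two-segment layout: [CLS] A...  [SEP] B...  [SEP]  (placeholder ids)
long[] inputIds      = { 101, 2129, 2024, 102, 2986, 4283, 102 };
long[] tokenTypeIds  = {   0,    0,    0,   0,    1,    1,   1 }; // 0 = first text, 1 = second
long[] attentionMask = {   1,    1,    1,   1,    1,    1,   1 }; // all real tokens, no padding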

Benchmark

Tokenizing the first 5000 characters of 10,254 articles of Simple English Wikipedia on a ThinkPad T14s Gen 1 (AMD Ryzen 7 PRO 4750U, 32 GB memory):

| Method                      |       Mean |    Error |   StdDev |         Gen0 |       Gen1 |      Gen2 |  Allocated |
|-----------------------------|-----------:|---------:|---------:|-------------:|-----------:|----------:|-----------:|
| BERTTokenizers              | 4,942.0 ms | 54.79 ms | 48.57 ms | 1001000.0000 | 95000.0000 | 4000.0000 | 5952.43 MB |
| FastBertTokenizerAllocating |   529.5 ms |  8.90 ms | 10.59 ms |   61000.0000 | 31000.0000 | 2000.0000 |  350.75 MB |
| FastBertTokenizerMemReuse   |   404.5 ms |  7.72 ms |  7.22 ms |   68000.0000 |          - |         - |  136.83 MB |

The FastBertTokenizerMemReuse benchmark writes the results of the tokenization to the same memory area, while FastBertTokenizerAllocating allocates new memory for its return values. See src/Benchmarks for details on how these benchmarks were performed.
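
A sketch of that reuse pattern is shown below. The buffer-writing Tokenize overload and its parameter order are assumptions based on the benchmark description; consult the package's API documentation for the exact signature:

using FastBertTokenizer;

var tok = new BertTokenizer();
await tok.LoadVocabularyAsync("vocab.txt", true);

var articles = new[] { "First article text.", "Second article text." }; // placeholder inputs
var inputIds = new long[512];       // allocated once, reused for every call
var attentionMask = new long[512];
var tokenTypeIds = new long[512];

foreach (var article in articles)
{
    // Assumed overload that writes into the caller-provided buffers
    // instead of allocating new memory per call.
    tok.Tokenize(article, inputIds, attentionMask, tokenTypeIds);
    // ...consume the buffers before the next iteration overwrites them.
}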

The logo was created by combining https://icons.getbootstrap.com/icons/cursor-text/ in the .NET brand color with https://icons.getbootstrap.com/icons/braces/.

Compatible and additional computed target framework versions:

.NET: net5.0 (computed), net5.0-windows (computed), net6.0 (computed), net6.0-android (computed), net6.0-ios (computed), net6.0-maccatalyst (computed), net6.0-macos (computed), net6.0-tvos (computed), net6.0-windows (computed), net7.0 (computed), net7.0-android (computed), net7.0-ios (computed), net7.0-maccatalyst (computed), net7.0-macos (computed), net7.0-tvos (computed), net7.0-windows (computed), net8.0 (computed), net8.0-android (computed), net8.0-browser (computed), net8.0-ios (computed), net8.0-maccatalyst (computed), net8.0-macos (computed), net8.0-tvos (computed), net8.0-windows (computed)
.NET Core: netcoreapp3.0 (computed), netcoreapp3.1 (computed)
.NET Standard: netstandard2.1 (compatible)
MonoAndroid: monoandroid (computed)
MonoMac: monomac (computed)
MonoTouch: monotouch (computed)
Tizen: tizen60 (computed)
Xamarin.iOS: xamarinios (computed)
Xamarin.Mac: xamarinmac (computed)
Xamarin.TVOS: xamarintvos (computed)
Xamarin.WatchOS: xamarinwatchos (computed)

NuGet packages

This package is not used by any NuGet packages.

GitHub repositories

This package is not used by any popular GitHub repositories.

| Version    | Downloads | Last updated |
|------------|-----------|--------------|
| 0.4.6-beta | 188       | 11/24/2023   |