AmpScm.RepoDb.PostgreSql.BulkOperations 1.2026.408.481

.NET CLI:

    dotnet add package AmpScm.RepoDb.PostgreSql.BulkOperations --version 1.2026.408.481

Package Manager (run within the Visual Studio Package Manager Console, as it uses the NuGet module's version of Install-Package):

    NuGet\Install-Package AmpScm.RepoDb.PostgreSql.BulkOperations -Version 1.2026.408.481

PackageReference (for projects that support it, copy this XML node into the project file):

    <PackageReference Include="AmpScm.RepoDb.PostgreSql.BulkOperations" Version="1.2026.408.481" />

Central Package Management (CPM): copy the PackageVersion node into the solution Directory.Packages.props file, then reference the package without a version in the project file:

    <PackageVersion Include="AmpScm.RepoDb.PostgreSql.BulkOperations" Version="1.2026.408.481" />
    <PackageReference Include="AmpScm.RepoDb.PostgreSql.BulkOperations" />

Paket:

    paket add AmpScm.RepoDb.PostgreSql.BulkOperations --version 1.2026.408.481

F# Interactive / Polyglot Notebooks (copy into the interactive tool or script source):

    #r "nuget: AmpScm.RepoDb.PostgreSql.BulkOperations, 1.2026.408.481"

C# file-based apps (starting in .NET 10 preview 4; place in a .cs file before any lines of code):

    #:package AmpScm.RepoDb.PostgreSql.BulkOperations@1.2026.408.481

Cake Addin:

    #addin nuget:?package=AmpScm.RepoDb.PostgreSql.BulkOperations&version=1.2026.408.481

Cake Tool:

    #tool nuget:?package=AmpScm.RepoDb.PostgreSql.BulkOperations&version=1.2026.408.481

RepoDb.PostgreSql.BulkOperations

An extension library that contains the official Bulk Operations of RepoDB for PostgreSQL.

Why use the Bulk Operations?

Bulk operations allow you to perform high-performance insert, update, delete, and merge operations on large datasets. Unlike the regular operations, bulk operations bypass constraint checking and logging in the database, often improving performance by more than 90% when processing large datasets.

With the normal Delete, Insert, Merge, and Update operations, each row is processed atomically. With the batch operations, multiple single operations are batched and executed together, but this still involves multiple round-trips between your application and the database.

With the bulk operations, all data is brought from the client application to the database in one go via the BinaryImport operation (a real bulk process). The data is then post-processed all at once on the database server to maximize performance. During the operation, the process bypasses auditing, logging, constraints and any other special database handling.


License

Apache-2.0


Installation

At the Package Manager Console, write the command below.

> Install-Package AmpScm.RepoDb.PostgreSql.BulkOperations

Then call the setup once.

using RepoDb;
using Npgsql;

GlobalConfiguration.Setup().UsePostgreSql();

See the Bulk Operations Guide for more information.

Special Arguments

The arguments qualifiers, keepIdentity, identityBehavior, pseudoTableType and mergedCommandType are available in most operations (see the Bulk Operations Guide).

The qualifiers argument defines the qualifier fields to be used by the operation. It usually maps to the WHERE expression of the generated SQL statements. If not given, the primary key field is used.

The keepIdentity argument defines whether the identity values of the entities/models are kept during the operation.

The identityBehavior argument controls the identity handling like keepIdentity does and, in addition, can instruct the operation to return the newly generated identity values from the database.

The pseudoTableType argument defines whether a physical pseudo-table is created during the operation. By default, a temporary table is used.

The mergedCommandType argument defines whether an ON CONFLICT DO UPDATE statement is used instead of separate UPDATE/INSERT SQL commands during the operation.
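Taken together, a call that exercises several of these arguments might look like the sketch below. The enum names (BulkImportIdentityBehavior, BulkImportPseudoTableType, BulkImportMergeCommandType) follow RepoDB's bulk-import naming conventions but are assumptions here; verify them against the installed package version.

using RepoDb;
using RepoDb.Enumerations.PostgreSql;
using Npgsql;

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	// Merge on LastName + DateOfBirth, return the generated identities,
	// use a physical pseudo-table and an ON CONFLICT DO UPDATE command.
	var mergedRows = connection.BinaryBulkMerge("Customer",
		customers,
		qualifiers: Field.From("LastName", "DateOfBirth"),
		identityBehavior: BulkImportIdentityBehavior.ReturnIdentity,
		pseudoTableType: BulkImportPseudoTableType.Physical,
		mergedCommandType: BulkImportMergeCommandType.OnConflictDoUpdate);
}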

Identity Setting Alignment

Behind the scenes, the library enforces additional logic to keep the identity values aligned. A new column named __RepoDb_OrderColumn is added to the pseudo-temporary table if an identity field is present on the underlying table. This column holds the index of each entity model within the source IEnumerable<T>.

During the bulk operation, each entity model's index is written to this column, ensuring that the value matches the item's position in the IEnumerable<T>. The resultset of the pseudo-temporary table is ordered by this column prior to the actual merge into the underlying table.

For both the BinaryBulkInsert and BinaryBulkMerge operations, when the newly generated identity values are set back onto the data models, the value of the __RepoDb_OrderColumn column is used to look up the corresponding item in the IEnumerable<T>; the compiled identity-setter function then assigns the identity value to the identity property.
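In practice, this means that after a bulk insert with identity return enabled, each entity in the original list carries its database-generated identity, in the original order. A minimal sketch, assuming the BulkImportIdentityBehavior enum name and an identity property named Id:

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers(); // Id is not yet set for new rows
	connection.BinaryBulkInsert("Customer",
		customers,
		identityBehavior: BulkImportIdentityBehavior.ReturnIdentity);
	// Each customer now carries its database-generated Id, matched back
	// to the right list item via the __RepoDb_OrderColumn ordering.
	var firstId = customers.First().Id;
}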

BatchSize

All the provided operations have a batchSize argument that lets you override the number of items wired to the server per round during the operation. By default it is null, meaning all items are sent together in one go.

Use this argument if you wish to optimize the operation for your situation (e.g., number of columns, type/size of data, network latency).
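For example, to send the rows in chunks of 1,000 instead of all at once (a sketch; tune the size to your data and network):

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	// Wire the rows to the server 1,000 at a time.
	var insertedRows = connection.BinaryBulkInsert("Customer",
		customers,
		batchSize: 1000);
}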

Async Methods

All the provided synchronous operations have equivalent asynchronous (Async) operations.
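For example, the asynchronous counterpart of BinaryBulkInsert can be awaited as in the sketch below; the cancellationToken parameter is an assumption based on common RepoDB async signatures.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var insertedRows = await connection.BinaryBulkInsertAsync("Customer",
		customers,
		cancellationToken: cancellationToken);
}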

BinaryBulkDelete

Deletes existing rows from the database in bulk. It returns the number of rows deleted during the operation.

BinaryBulkDelete via DataEntities

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var deletedRows = connection.BinaryBulkDelete<Customer>(customers);
}

Or with qualifiers.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var deletedRows = connection.BinaryBulkDelete<Customer>(customers, qualifiers: e => new { e.LastName, e.DateOfBirth });
}

Or via table-name.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var deletedRows = connection.BinaryBulkDelete("Customer", customers);
}

And with qualifiers.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var deletedRows = connection.BinaryBulkDelete("Customer", customers, qualifiers: Field.From("LastName", "DateOfBirth"));
}

BinaryBulkDelete via DataTable

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var table = GetCustomersAsDataTable();
	var deletedRows = connection.BinaryBulkDelete("Customer", table);
}

Or with qualifiers.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var table = GetCustomersAsDataTable();
	var deletedRows = connection.BinaryBulkDelete("Customer", table, qualifiers: Field.From("LastName", "DateOfBirth"));
}

BinaryBulkDelete via DbDataReader

using (var connection = new NpgsqlConnection(ConnectionString))
{
	using (var reader = connection.ExecuteReader("SELECT * FROM \"Customer\";"))
	{
		var deletedRows = connection.BinaryBulkDelete("Customer", reader);
	}
}

Or with qualifiers.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	using (var reader = connection.ExecuteReader("SELECT * FROM \"Customer\";"))
	{
		var deletedRows = connection.BinaryBulkDelete("Customer", reader, qualifiers: Field.From("LastName", "DateOfBirth"));
	}
}

BinaryBulkDeleteByKey

Deletes existing rows from the database in bulk via a list of primary keys. It returns the number of rows deleted during the operation.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var primaryKeys = new [] { 1, 2, ..., 10045 };
	var deletedRows = connection.BinaryBulkDeleteByKey(primaryKeys);
}

BinaryBulkInsert

Inserts a list of entities into the database in bulk. It returns the number of rows inserted into the database.

BinaryBulkInsert via DataEntities

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var insertedRows = connection.BinaryBulkInsert<Customer>(customers);
}

Or via table-name.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var insertedRows = connection.BinaryBulkInsert("Customer", customers);
}

BinaryBulkInsert via DataTable

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var table = GetCustomersAsDataTable();
	var insertedRows = connection.BinaryBulkInsert("Customer", table);
}

BinaryBulkInsert via DbDataReader

using (var connection = new NpgsqlConnection(ConnectionString))
{
	using (var reader = connection.ExecuteReader("SELECT * FROM \"Customer\";"))
	{
		var insertedRows = connection.BinaryBulkInsert("Customer", reader);
	}
}

BinaryBulkMerge

Merges a list of entities into the database in bulk. A new row is inserted if not present, and the existing row is updated if present, based on the defined qualifiers. It returns the number of rows inserted or updated in the database.

BinaryBulkMerge via DataEntities

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var mergedRows = connection.BinaryBulkMerge<Customer>(customers);
}

Or with qualifiers.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var mergedRows = connection.BinaryBulkMerge<Customer>(customers, qualifiers: e => new { e.LastName, e.DateOfBirth });
}

Or via table-name.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var mergedRows = connection.BinaryBulkMerge("Customer", customers);
}

And with qualifiers.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var mergedRows = connection.BinaryBulkMerge("Customer", customers, qualifiers: Field.From("LastName", "DateOfBirth"));
}

BinaryBulkMerge via DataTable

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var table = GetCustomersAsDataTable();
	var mergedRows = connection.BinaryBulkMerge("Customer", table);
}

Or with qualifiers.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var table = GetCustomersAsDataTable();
	var mergedRows = connection.BinaryBulkMerge("Customer", table, qualifiers: Field.From("LastName", "DateOfBirth"));
}

BinaryBulkMerge via DbDataReader

using (var connection = new NpgsqlConnection(ConnectionString))
{
	using (var reader = connection.ExecuteReader("SELECT * FROM \"Customer\";"))
	{
		var mergedRows = connection.BinaryBulkMerge("Customer", reader);
	}
}

Or with qualifiers.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	using (var reader = connection.ExecuteReader("SELECT * FROM \"Customer\";"))
	{
		var mergedRows = connection.BinaryBulkMerge("Customer", reader, qualifiers: Field.From("LastName", "DateOfBirth"));
	}
}

BinaryBulkUpdate

Updates existing rows in the database in bulk. The affected rows are determined by the values of the qualifier fields passed to the operation. It returns the number of rows updated in the database.

BinaryBulkUpdate via DataEntities

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var rows = connection.BinaryBulkUpdate<Customer>(customers);
}

Or with qualifiers.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var rows = connection.BinaryBulkUpdate<Customer>(customers, qualifiers: e => new { e.LastName, e.DateOfBirth });
}

Or via table-name.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var rows = connection.BinaryBulkUpdate("Customer", customers);
}

And with qualifiers.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var customers = GetCustomers();
	var rows = connection.BinaryBulkUpdate("Customer", customers, qualifiers: Field.From("LastName", "DateOfBirth"));
}

BinaryBulkUpdate via DataTable

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var table = GetCustomersAsDataTable();
	var rows = connection.BinaryBulkUpdate("Customer", table);
}

Or with qualifiers.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	var table = GetCustomersAsDataTable();
	var rows = connection.BinaryBulkUpdate("Customer", table, qualifiers: Field.From("LastName", "DateOfBirth"));
}

BinaryBulkUpdate via DbDataReader

using (var connection = new NpgsqlConnection(ConnectionString))
{
	using (var reader = connection.ExecuteReader("SELECT * FROM \"Customer\";"))
	{
		var rows = connection.BinaryBulkUpdate("Customer", reader);
	}
}

Or with qualifiers.

using (var connection = new NpgsqlConnection(ConnectionString))
{
	using (var reader = connection.ExecuteReader("SELECT * FROM \"Customer\";"))
	{
		var rows = connection.BinaryBulkUpdate("Customer", reader, qualifiers: Field.From("LastName", "DateOfBirth"));
	}
}
Product compatible and additional computed target framework versions:

.NET: net8.0, net9.0 and net10.0 are compatible; net5.0 through net7.0 (and the platform-specific TFMs such as -windows, -android, -ios, -maccatalyst, -macos, -tvos and -browser) were computed.
.NET Core: netcoreapp2.0 through netcoreapp3.1 were computed.
.NET Standard: netstandard2.0 is compatible; netstandard2.1 was computed.
.NET Framework: net461 through net481 were computed.
Mono/Xamarin/Tizen: monoandroid, monomac, monotouch, xamarinios, xamarinmac, xamarintvos, xamarinwatchos, tizen40 and tizen60 were computed.

Learn more about Target Frameworks and .NET Standard.


Version Downloads Last Updated
1.2026.408.481 56 4/8/2026
1.2026.329.480 141 3/29/2026
1.2026.317.475 225 3/17/2026
1.2026.314.468 212 3/16/2026
1.2026.309.434 212 3/9/2026
1.2026.304.420 211 3/4/2026
1.2026.302.405 236 3/2/2026
1.15.2602.20384 223 2/20/2026
1.15.2512.22350 351 12/22/2025
1.14.2508.12328 420 8/12/2025
1.14.2508.4318 366 8/4/2025
1.14.2507.18312 267 7/18/2025
1.14.2507.18302 302 7/18/2025
1.14.2507.17297 333 7/17/2025
1.14.2507.4263 288 7/4/2025
1.14.2507.2257 334 7/2/2025
1.14.2506.30251 331 6/30/2025
1.14.2506.18235 355 6/18/2025
1.14.2506.18231 333 6/18/2025
1.14.2506.10222 499 6/12/2025