FotNET 2.5.2

Installation:

	.NET CLI:
		dotnet add package FotNET --version 2.5.2

	Package Manager (run in the Visual Studio Package Manager Console):
		NuGet\Install-Package FotNET -Version 2.5.2

	PackageReference (for projects that support PackageReference, add this node to the project file):
		<PackageReference Include="FotNET" Version="2.5.2" />

	Paket CLI:
		paket add FotNET --version 2.5.2

	Script & Interactive (F# Interactive and Polyglot Notebooks):
		#r "nuget: FotNET, 2.5.2"

	Cake Addin:
		#addin nuget:?package=FotNET&version=2.5.2

	Cake Tool:
		#tool nuget:?package=FotNET&version=2.5.2


FotNET

FotNET is a simple library for working with REGIONAL-CONVOLUTION, RECURRENT, GENERATIVE-ADVERSARIAL, CONVOLUTION and CLASSIC NEURAL NETWORKS such as the PERCEPTRON. The main point is that you can create your own neural network without libraries that do all the work for you. The project is open source: you can read the code, download it and change any part you need, because the XML documentation makes it very simple to understand.

Introduction:


Importing:

First of all, you should download this library and include it in your own project. You can do this with NUGET or MANUALLY by downloading the source code.

	using FotNET;
	using FotNET.NETWORK;


Creation:

To create a neural network, you only need to follow a few SIMPLE steps:

  1. Create neural network object:

     Network network = new Network(layers);
    

1.1. Also, before we start, you need to choose a model for the neural network. You can do this by creating a List of layers:

	List<ILayer> layers = new List<ILayer> {
		new ConvolutionLayer(), // Input tensor gets convolved by filters
		new ActivationLayer(),  // Input tensor gets activated
		new DropoutLayer(),
		new PoolingLayer(),     // Input tensor gets pooled
		new ConvolutionLayer(),
		new ActivationLayer(),
		new DropoutLayer(),
		new PoolingLayer(),
		new FlattenLayer(),     // Input tensor gets converted to a 1d tensor
		new PerceptronLayer(),  // Input 1d tensor is multiplied by the weights
		new ActivationLayer(),
		new DropoutLayer(),
		new PerceptronLayer(),
		new ActivationLayer(),
		new SoftMaxLayer()
	};
	
	>> OR
	
	List<ILayer> layers = new List<ILayer> {
		new FlattenLayer(),
		new RecurrentLayer(),
		new SoftMaxLayer()
	};
	
	>> OR
	
	List<ILayer> layers = new List<ILayer> {
		new RoughenLayer(),       // Converts a 1d tensor to a multi-dimensional tensor
		new DeconvolutionLayer(), // Input tensor gets convolved by filters
		new ActivationLayer(),    // Input tensor gets activated
		new DeconvolutionLayer(),
		new ActivationLayer(),
		new NormalizationLayer(),
		new DataLayer()
	};
	
	>> OR
	
	List<ILayer> layers = new List<ILayer> {
		new BatchNormalizationLayer(), // Batch normalizer
		new UpSampleLayer(),           // Up-samples the input tensor with one of the up-sampling types
		new ConvolutionLayer(),
		new ActivationLayer(),         // Input tensor gets activated
		new UpSampleLayer(),
		new ConvolutionLayer(),
		new ActivationLayer(),
		new NormalizationLayer(),
		new DataLayer()
	};

1.1.1. Every layer needs parameters that you should choose yourself:

	new ConvolutionLayer(filterCount, filterHeight, filterWeight, filterDepth, weightInitialization, convolutionStride);
	// or
	new ConvolutionLayer(pathToFilter, convolutionStride); // pathToFilter is the path to your custom filter. Examples of custom filters can be found at the end of this ReadMe.
	
	new ActivationLayer(activateFunction);
	new PoolingLayer(poolingType, poolingSize);
	new PerceptronLayer(size, sizeOfNextLayer, weightInitialization);
	new PerceptronLayer(size);
	new DropoutLayer(percentOfDropped);
	new RecurrentLayer(activationFunction, recurrencyType, hiddenLayerSize, weightInitialization);
	new DeconvolutionLayer(filterCount, filterHeight, filterWeight, filterDepth, weightInitialization, convolutionStride);
	new NormalizationLayer(normalizationType);
	

1.1.1.1. Available types of pooling:

	new MaxPooling();
	new MinPooling();
	new AveragePooling();
	new BilinearPooling();

1.1.1.2. Variants of weight initialization:

	new HeInitialization(); // Usually used with ReLU, Leaky ReLU and Double Leaky ReLU
	new XavierInitialization(); // Usually used with Tangensoid and Sigmoid
	new NormalizedXavierInitialization(); // Same activation functions as above, but normalized
	new ConstInitialization();
	new RandomInitialization();
	new ZeroInitialization();
	new LeCunNormalization();

1.1.1.3. Types of normalization:

	new Abs(); // Normalizes the tensor with Abs
	new MinMax(); // Normalizes the tensor between its min and max values

1.1.1.4. Types of up-sampling:

	new NearestNeighbor();
	new BilinearInterpolation();
	new BicubicInterpolation();

1.2. After that you should choose one of the ACTIVATION FUNCTIONS, or create your own one that inherits from the Function abstract class (a complete example putting these pieces together is shown after this list):

	new ReLu();
	new LeakyReLu();
	new DoubleLeakyReLu();
	new Sigmoid();
	new Tangensoid();
	new GeLu();
	new BinaryStep();
	new SoftPlus();
	new Identity();
	new Swish();
	new HardSigmoid();
	new SiLu();
	new SeLu();
	new HyperbolicTangent();
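
Putting steps 1–1.2 together, here is a minimal sketch of a small convolutional classifier. The concrete values (filter count and size, layer sizes, pooling size, stride) are illustrative assumptions, not library defaults, and have to match your own input tensor:

	// A sketch only: all numeric values below are assumptions for illustration.
	int flattenedSize = 128; // placeholder: depends on input size, filter count and pooling

	List<ILayer> layers = new List<ILayer> {
		new ConvolutionLayer(8, 3, 3, 1, new HeInitialization(), 1), // 8 filters of 3x3x1, stride 1
		new ActivationLayer(new ReLu()),
		new PoolingLayer(new MaxPooling(), 2),
		new FlattenLayer(),
		new PerceptronLayer(flattenedSize, 10, new HeInitialization()),
		new ActivationLayer(new ReLu()),
		new PerceptronLayer(10),
		new SoftMaxLayer()
	};

	Network network = new Network(layers);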

ForwardFeed:
	network.ForwardFeed(tensor); // Pass an image or any other tensor
	
	>> OR
	
	network.ForwardFeed(tensor, answerType); // answerType: the class or the value of the class

After FORWARD FEED, depending on your choice, the neural network returns the INDEX of the predicted class (one of the classes you put on the last PERCEPTRON LAYER) or its VALUE. If you don't pass the answer-type option, it returns the tensor of answers.

If the predicted class is wrong, we move on to back propagation.


BackPropagation:
	network.BackPropagation(expectedClass, expectedValue, lossFunction, learningRate, backPropagateStatus); // Index of the expected class and its value (usually 1)
	
	>> OR
	
	network.BackPropagation(expectedTensor, lossFunction, learningRate, backPropagateStatus);
	
	>> OR
	
	network.BackPropagation(error, learningRate, backPropagateStatus);
	

Available lossFunction options:

	new Mse();
	new Mae();
	new Cost();
	new CeLl();
	new Mape();
	new Mbe();

If the expected class differs from the predicted one, we should use the BACKPROPAGATION method.
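
For example, a single manual training step could look like this (a sketch: inputTensor, answerType, expectedClass and the numeric values are placeholders, and backPropagateStatus is assumed to be a simple flag that enables the update):

	// Sketch of one training step; variable names and numeric values are assumptions,
	// and ForwardFeed is assumed to return the index of the predicted class here.
	var predictedClass = network.ForwardFeed(inputTensor, answerType);
	if (!predictedClass.Equals(expectedClass))
	    network.BackPropagation(expectedClass, 1d, new Mse(), 0.01, true);
	    // arguments: expected class, expected value (usually 1), loss function,
	    // learning rate, back-propagate status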


Save and load weights:
Save:

All weights can be saved by using the following method.

EXAMPLE:

	data = network.GetWeights();
Load:

To load the weights, use the same approach.

EXAMPLE:

	network.LoadWeights(data);
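
For example, weights can be moved from a trained network into a fresh one built from the same layer model (a sketch; trainedNetwork and freshNetwork are placeholder names, and the concrete type returned by GetWeights() is not documented here, so var is used):

	var data = trainedNetwork.GetWeights(); // read the weights from the trained model
	freshNetwork.LoadWeights(data);         // both networks must share the same layer model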

Fitting and Testing
Fitting

The Network class can be fitted and tested; all you need is a data set (a list of data objects) and, for fitting, the number of epochs. Let's start with the creation of a data set.

	network.Fit(dataType, csvPath, csvConfig, epochCount, baseLearningRate);
	// dataType  -> Array or Tensor image
	// csvPath   -> path to csv file 
	// csvConfig -> rules that describe how the csv file should be processed
	/*
	DataConfig csvConfig = new DataConfig() {
		StartRow          = 1, 
		InputColumnStart  = 1,
		InputColumnEnd    = 10,
		OutputColumnStart = 12,
		OutputColumnEnd   = 16,
		Delimiters        = new[] {";"}
	}
	*/
	// epochCount -> count of epochs
	// baseLearningRate -> starting learning rate 

This is an implementation of a data set. "new Image()", as you can see, is an implementation of an image. The first part, where we see "new double[,,]", is very simple and stands for x, y and depth. The second part is the answer for this data-set element. For example, if we have to choose one of ten numbers, we create an array in which the element at the needed index has the value 1.

Testing

After fitting, you can test your model by using the following method with a test data set:

	double accuracy = network.Test(dataType, csvPath, csvConfig);
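
Putting both together, a fit-then-test run over two CSV files could look like this (a sketch: the file names, column ranges, epoch count and learning rate are illustrative assumptions, and dataType stands for the Array or Tensor-image value described above):

	// Sketch only: paths, column ranges and hyper-parameters are assumptions.
	DataConfig config = new DataConfig() {
		StartRow          = 1,
		InputColumnStart  = 1,
		InputColumnEnd    = 10,
		OutputColumnStart = 12,
		OutputColumnEnd   = 16,
		Delimiters        = new[] { ";" }
	};

	network.Fit(dataType, "train.csv", config, 10, 0.01);         // 10 epochs, base learning rate 0.01
	double accuracy = network.Test(dataType, "test.csv", config); // accuracy on the test data set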

Examples

Custom filters

You should create a custom file.txt and fill it like this:

1 0 1
1 0 1
1 0 1

If you want to add several filters, add more blocks and separate them with '/':

1 0 1
1 0 1
1 0 1/
0 0 0
1 1 1
0 0 0/
1 0 0
0 1 0
0 0 1
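
Such a file can then be passed to the path-based ConvolutionLayer constructor from section 1.1.1 (a sketch; the file name "filters.txt" and the stride value are assumptions):

	var customConvolution = new ConvolutionLayer("filters.txt", 1); // 1 = convolution stride
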
Compatible and computed target framework versions:

	Compatible: net6.0, net7.0
	Computed:   net8.0 and the platform-specific variants (android, ios, maccatalyst, macos, tvos, windows, browser)



Release version