Create and Train a Simple Artificial Neuron with Python: Teach a Neuron to Sum Two Numbers

We will start with the simplest possible neuron and train it to sum two numbers. The best part: for simplicity, we will use only the Python standard library and write everything from scratch.

First, the basics.

Like a biological neuron, an artificial neuron can learn and change over time.

Human behavior is partly driven by reward (pleasure) and the absence of reward (pain).

For example, if something makes us happy, we will repeat it next time. In the same way, if something does not make us happy, we will try to avoid it next time (food, objects, and other things).

More related
===========

If a student gets 79 out of 100, they will try to improve on the next exam with 100 as the target, and they know they fell 21 marks short of 100.

In the same way, we can give a neuron a task and, after it completes the task, grade how accurate the result is. For example, we ask the neuron to calculate 1 + 9 = ?

If it gives 10, it is 100% accurate.

If it gives 7, it has made a mistake. The mistake amount is 3, so we can measure the accuracy and tell the neuron that the result is not correct: you need to add 3 to your result.

If we repeat the process again and again, the neuron will eventually be able to give the exact result. Practice makes perfect!
So the neuron learns from its mistake: it calculates the mistake amount and adjusts its calculation next time for greater accuracy. That's it!
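
The predict-measure-adjust cycle described above can be sketched in a few lines. The numbers here are made up for illustration (the real training loop later uses a much smaller learning rate):

```python
# One correction step, with illustrative numbers.
target = 10          # we asked for 1 + 9
prediction = 7       # the neuron's (wrong) answer
error = prediction - target   # -3: the neuron is 3 too low

# Nudge the output toward the target by a fraction of the error.
learning_rate = 0.1
adjustment = -learning_rate * error   # positive, so the next guess moves up
new_prediction = prediction + adjustment
print(error, new_prediction)
```

Repeating this step shrinks the error a little each time, which is exactly what the training loop below does across many examples.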

Let's begin.

First, the imports. Randomness is necessary for training: random initial weights and random training data make the training more effective.


from random import uniform, randint

Next part: weights and bias.


w1 = uniform(0, 1)
w2 = uniform(0, 1)
bias = uniform(0, 1)

Weights
Weights play an important role in decision making. They are adjusted to improve the result, so the weight values change over time during training.

Bias
The bias is a constant added to the weighted sum of inputs before passing it through the activation function.
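
Putting weights and bias together, the neuron's output is just a weighted sum of its inputs plus the bias. A minimal sketch (before any training, so the output is still random):

```python
from random import uniform

# Random starting values between 0 and 1.
w1 = uniform(0, 1)
w2 = uniform(0, 1)
bias = uniform(0, 1)

def forward(a, b):
    # Weighted sum of the two inputs plus the bias term.
    return w1 * a + w2 * b + bias

print(forward(1, 9))  # random at first; training should drive this toward 10
```

Training only changes the three numbers w1, w2, and bias; the formula itself stays the same.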

The code:


from random import uniform, randint

# Initialize weights and biases with random values between 0 and 1
w1 = uniform(0, 1)
w2 = uniform(0, 1)
bias = uniform(0, 1)



# Function to perform training
def train_perceptron():
    global w1, w2, bias, accuracies, epochs_list
    
    # Generate training data
    a_train = [randint(1, 100) for _ in range(1001)]
    b_train = [randint(1, 100) for _ in range(1001)]
    y_train = [a + b for a, b in zip(a_train, b_train)]
    m = len(a_train)
    
    # Number of training epochs and learning rate
    epochs = 10000
    learning_rate = 0.00001  # Reduced learning rate
    
    accuracies = []
    epochs_list = []
    
    # Perform training for specified epochs
    for epoch in range(epochs):
        correct_predictions = 0
        
        for i in range(m):
            # Forward pass (linear/identity activation)
            z = w1 * a_train[i] + w2 * b_train[i] + bias
            prediction = z
            
            # Compute the error (difference between prediction and target)
            error = (prediction - y_train[i])
            
            # Backward pass (calculate gradients)
            dW1 = error * a_train[i]
            dW2 = error * b_train[i]
            dB = error
            
            # Gradient clipping (optional but recommended)
            max_gradient = 10.0
            dW1 = max(min(dW1, max_gradient), -max_gradient)
            dW2 = max(min(dW2, max_gradient), -max_gradient)
            dB = max(min(dB, max_gradient), -max_gradient)
            
            # Update weights and biases using gradients and learning rate
            w1 -= learning_rate * dW1
            w2 -= learning_rate * dW2
            bias -= learning_rate * dB
            
            # Check prediction accuracy
            if abs(prediction - y_train[i]) < 0.1:  # Adjust threshold as needed
                correct_predictions += 1
        
        # Calculate accuracy for current epoch
        accuracy = correct_predictions / m * 100
        accuracies.append(accuracy)
        epochs_list.append(epoch)
        
        # Print accuracy every 1000 epochs
        if epoch % 1000 == 0:
            print(f"Epoch {epoch}, Accuracy: {accuracy}%")

# Calculate the result based on the trained weights and biases for a specific pair (a, b)
def test_perceptron(a, b):
    z = w1 * a + w2 * b + bias
    return z

# Train the perceptron model
train_perceptron()

# Test the model with specific pairs of numbers
a_test1, b_test1 = 10, 20
result1 = test_perceptron(a_test1, b_test1)

a_test2, b_test2 = 5, 7.9
result2 = test_perceptron(a_test2, b_test2)

# Print the results and final values of weights and biases
print(f"Result (sum of {a_test1} and {b_test1}):", result1)
print(f"Result (sum of {a_test2} and {b_test2}):", result2)
print("Final values - w1:", w1, "w2:", w2, "bias:", bias)



i1 = float(input("Enter the first number: "))

i2 = float(input("Enter the second number: "))
print(f"Predicted sum: {test_perceptron(i1, i2)}")
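
Why does this converge? Because addition is exactly the linear function the neuron computes, a perfect solution exists: w1 = 1, w2 = 1, bias = 0. A quick sanity check, independent of training, confirms that those values reproduce the sums tested above:

```python
def neuron(a, b, w1=1.0, w2=1.0, bias=0.0):
    # The same forward pass as test_perceptron, with the ideal parameters.
    return w1 * a + w2 * b + bias

print(neuron(10, 20))   # 30.0
print(neuron(5, 7.9))   # 12.9
```

Gradient descent only has to nudge the random starting weights toward these values, which is why the accuracy keeps climbing as the epochs go by.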



Get the code from GitHub:

https://github.com/01one/perceptron/blob/main/01_perceptron_sum_of_two_numbers.py