A company has prepared 10 great versions of the same Ad, but it is not sure which one to put on the social network. It wants to run the Ad that will get the maximum clicks and lead to the best Conversion Rate.

The company would like to hire a data scientist to find the best strategy for working out which version of the Ad users respond to best and which version will lead to the highest Conversion Rate.

The aim of this project is to find the Ad that will get the most clicks. We implement the Upper Confidence Bound (UCB) algorithm in Python.

Let’s get our environment ready with the libraries we’ll need and then import the data!

```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
```

Check out the Data

```
# importing the dataset
df = pd.read_csv('~/DataSet GitHub/UCB/Ads_CTR_Optimisation.csv')
df.head()
```

`df.info()`

The data contains 10,000 users’ interactions with Ads on a social network.
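
Since the CSV may not be available locally, here is a minimal sketch of the format the rest of the notebook assumes: 10,000 rows (one per user) and 10 columns (one per Ad), where a 1 means that user would click that Ad. The `df_demo` name and the click probabilities are purely illustrative stand-ins.

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for Ads_CTR_Optimisation.csv: 10,000 users x 10 Ads,
# each cell is 1 if that user would click that Ad, else 0.
rng = np.random.default_rng(0)
click_probs = rng.uniform(0.05, 0.3, size=10)            # unknown true CTR per Ad
data = (rng.random((10_000, 10)) < click_probs).astype(int)
df_demo = pd.DataFrame(data, columns=[f'Ad {i + 1}' for i in range(10)])
print(df_demo.shape)   # (10000, 10)
print(df_demo.head())
```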

In this project, we observe whether or not each user clicks on the Ad they are shown.

In Upper Confidence Bound (UCB), if the user clicks on the Ad we receive a reward of 1, and if the user doesn’t click on the Ad we receive a reward of 0.

The key thing to understand about Reinforcement Learning is that the Ad chosen in each round depends on the results observed in all the previous rounds.
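
The UCB selection rule can be written as a small helper: each round we pick the Ad with the largest upper confidence bound, i.e. its average reward plus an exploration bonus that shrinks as the Ad is selected more often. This is only a sketch of the bound used in the implementation below; the function name and sample numbers are illustrative.

```python
import math

def upper_confidence_bound(total_reward, n_selected, round_n):
    """Average reward plus an exploration bonus that shrinks
    as an Ad is selected more often."""
    if n_selected == 0:
        return float('inf')          # force every Ad to be tried once
    average_reward = total_reward / n_selected
    delta = math.sqrt(3 / 2 * math.log(round_n + 1) / n_selected)
    return average_reward + delta

# A well-explored Ad gets a tighter bound than a barely-explored one:
print(upper_confidence_bound(40, 200, 999))   # explored 200 times
print(upper_confidence_bound(2, 5, 999))      # explored only 5 times
```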


### Implementing the UCB Algorithm on the Dataset

In this step, we implement Upper Confidence Bound on the dataset to find out which version of the Ad will get the most clicks.

```
import math

N = 10000                      # number of rounds (users)
d = 10                         # number of Ads
ad_selected = []               # Ad chosen in each round
num_of_selection = [0] * d     # how many times each Ad has been selected
sum_of_reward = [0] * d        # total reward collected by each Ad
total_reward = 0

for n in range(0, N):
    ad = 0
    max_upper_bound = 0
    for i in range(0, d):
        if num_of_selection[i] > 0:
            average_reward = sum_of_reward[i] / num_of_selection[i]
            delta_i = math.sqrt(3 / 2 * math.log(n + 1) / num_of_selection[i])
            upper_bound = average_reward + delta_i
        else:
            upper_bound = float('inf')   # try every Ad at least once
        if upper_bound > max_upper_bound:
            max_upper_bound = upper_bound
            ad = i
    ad_selected.append(ad)
    num_of_selection[ad] += 1
    reward = df.values[n, ad]
    sum_of_reward[ad] += reward
    total_reward += reward
```


Visualising the UCB Results

In this stage, we plot a histogram of how many times each Ad was selected:

```
plt.figure(figsize=(12,8))
plt.hist(ad_selected, color = 'orange')
plt.title('Histogram of Ads Selection')
plt.xlabel('Ads')
plt.ylabel('Number of Times each Ads was Selected')
plt.show()
```

Now we can see that Ad number 4 was selected most often by the algorithm and has the highest Conversion Rate. That is the Ad we should show to our users in the marketing campaign.
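
Rather than reading the winner off the histogram alone, each Ad’s empirical Conversion Rate can be checked directly from the counters the UCB loop maintained. A sketch with hypothetical counts (after a real run, the actual numbers come from `num_of_selection` and `sum_of_reward`):

```python
# Hypothetical counters of the kind produced by the UCB loop above.
num_of_selection = [120, 95, 300, 110, 7000, 180, 90, 1610, 250, 245]
sum_of_reward    = [18, 12, 50, 15, 1890, 25, 10, 330, 38, 36]

# Empirical CTR per Ad, and the index of the best one.
ctr = [r / n for r, n in zip(sum_of_reward, num_of_selection)]
best_ad = max(range(len(ctr)), key=lambda i: ctr[i])
print(f'Best Ad index: {best_ad}, empirical CTR: {ctr[best_ad]:.3f}')
```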


### Implementing the Thompson Sampling Algorithm on the Dataset

In this step, we implement another Reinforcement Learning algorithm, called Thompson Sampling, on the dataset to find out which version of the Ad will get the most clicks.

```
# Implementing Thompson Sampling
import random

N = 10000                        # number of rounds (users)
d = 10                           # number of Ads
ads_selected = []                # Ad chosen in each round
numbers_of_rewards_1 = [0] * d   # clicks observed per Ad
numbers_of_rewards_0 = [0] * d   # non-clicks observed per Ad
total_reward = 0

for n in range(0, N):
    ad = 0
    max_random = 0
    for i in range(0, d):
        # Draw a sample from each Ad's Beta posterior and pick the largest.
        random_beta = random.betavariate(numbers_of_rewards_1[i] + 1,
                                         numbers_of_rewards_0[i] + 1)
        if random_beta > max_random:
            max_random = random_beta
            ad = i
    ads_selected.append(ad)
    reward = df.values[n, ad]
    if reward == 1:
        numbers_of_rewards_1[ad] += 1
    else:
        numbers_of_rewards_0[ad] += 1
    total_reward += reward
```


Visualising the Thompson Sampling Results

In this stage, we plot a histogram of how many times each Ad was selected:

```
plt.figure(figsize=(12,8))
plt.hist(ads_selected, color = 'pink')
plt.title('Histogram of ads selections')
plt.xlabel('Ads')
plt.ylabel('Number of times each ad was selected')
plt.show()
```

As we can see, Thompson Sampling also figured out which Ad is best to select: Ad number 4 was again selected most often and has the highest Conversion Rate (CTR).
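
The reason Thompson Sampling converges is visible in the Beta posterior itself: after `s` observed clicks and `f` non-clicks, draws from `Beta(s + 1, f + 1)` concentrate around the Ad’s true Conversion Rate, so the best Ad wins the per-round sampling contest more and more often. A short sketch (the counts are illustrative, not taken from the run above):

```python
import random

random.seed(42)

# Hypothetical click / no-click counts for a well-explored Ad (~27% CTR).
clicks, no_clicks = 270, 730
posterior_mean = (clicks + 1) / (clicks + no_clicks + 2)
print(f'Posterior mean CTR: {posterior_mean:.3f}')

# Once the counts are large, draws from the posterior stay close to that mean.
draws = [random.betavariate(clicks + 1, no_clicks + 1) for _ in range(1000)]
print(f'Sample mean of 1000 draws: {sum(draws) / 1000:.3f}')
```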