Image Steganography Using Machine Learning
Cryptography | Machine Learning | Deep Learning
Image steganography is a technique for concealing secret information within digital images without altering their visual appearance. The idea is to exploit the redundancy in image data to embed a hidden message in such a way that it remains undetectable to the human eye. The resulting steganographic image can be transmitted over the internet or any other digital medium, providing a secure and inconspicuous way to communicate sensitive information.
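To make the redundancy idea concrete, the classical (non-learning) approach is least-significant-bit (LSB) embedding, which overwrites the one bit of each pixel value that contributes least to what we see. The sketch below is only an illustration of that principle and is independent of the machine learning model built later in this article; the helper names lsb_hide and lsb_reveal are made up for the example.
#A minimal classical LSB sketch (illustration only, not part of the ML model)
import numpy as np

def lsb_hide(image, message):
    # Spread the ASCII bits of the message over the lowest bit of each pixel value
    bits = np.unpackbits(np.frombuffer(message.encode("ascii"), dtype=np.uint8))
    flat = image.flatten()
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def lsb_reveal(image, length):
    # Read back `length` ASCII characters from the lowest bit of each pixel value
    bits = image.flatten()[:length * 8] & 1
    return np.packbits(bits).tobytes().decode("ascii")

cover = np.random.randint(0, 256, (100, 100, 3), dtype=np.uint8)
stego = lsb_hide(cover, "hello")
print(lsb_reveal(stego, 5))  # prints "hello"; the image looks unchanged to the eye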
Hiding text messages inside images before communicating them can enhance the security and privacy of communication. An unauthorized third party would not only have to suspect that the image carries a message, but would also need to know how the message was embedded and be able to extract it from the image.
This machine learning project performs image steganography by training a neural network to embed a text message into an image; the hidden message can then be extracted back out of the image.
First, we will import all the required libraries.
#Importing all the necessary libraries
import numpy as np
from tensorflow.keras.layers import *
from tensorflow.keras.losses import *
from tensorflow.keras.callbacks import *
from tensorflow.keras.models import *
from tensorflow.keras.metrics import *
from tensorflow.keras.preprocessing.image import img_to_array, load_img
from matplotlib import pyplot as plt
Next, we define the required functions.
1) "rand_img(size)" function: This function generates a random image of a given size. The resulting array can be imagined as an image with pixel values that have random shades of gray.
def rand_img(size):
    return np.random.randint(0, 256, size) / 255.0
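As a quick, illustrative check (not part of the original listing), the matplotlib import from above can be used to look at one such random cover image; the shape (100, 100, 3) is the one assumed throughout the rest of the article.
#Visualizing one random cover image (sanity check)
img = rand_img((100, 100, 3))
print(img.shape, img.min(), img.max())  # (100, 100, 3), values in [0, 1]
plt.imshow(img)
plt.axis("off")
plt.show()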
2) "rand_sentence(len, max)" function: This function generates a random sentence of a given length. This function does not perform any processing on the generated sentence, such as converting the integers into actual words or adding any grammatical structure. It simply creates a sequence of random integers that can be used to represent a sentence in some way.
def rand_sentence(len, max):
    return np.random.randint(0, max, len)
3) "onehot(sentence, max)" function: This function simply performs one-hot encoding on a given sentence.
def onehot(sentence, max):
    onehot = np.zeros((len(sentence), max))
    for i, v in enumerate(sentence):
        onehot[i, v] = 1
    return onehot
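For example, the two helpers can be checked together as follows (illustrative values; the vocabulary size of 128, one slot per ASCII code, is the value used later for text messages):
#Illustrative check of rand_sentence and onehot
sentence = rand_sentence(10, 128)
encoded = onehot(sentence, 128)
print(sentence.shape, encoded.shape)        # (10,) (10, 128)
print(encoded[0].argmax() == sentence[0])   # True: argmax recovers the original code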
4) "data_generator(image_size, sentence_len, sentence_max_word, batch_size=32)" function: This function is used to generate training data for a neural network that takes as input both images and sentences. The generator function produces batches of training data on-the-fly, rather than pre-loading all data into memory, which can be useful for training with large datasets.
The function uses an infinite loop (the "while True" statement) to continuously generate new batches of training data. Each batch consists of four numpy arrays: "x_img", "x_sen", "y_img", and "y_sen". The "x_img" and "x_sen" arrays contain the input images and sentences for the network, while the "y_img" and "y_sen" arrays contain the corresponding targets the network should learn to reproduce: the same image and the one-hot encoding of the same sentence.
The "yield" statement is used to return the numpy arrays as a tuple of input and output data for the network. The generator function can be used in a training loop to provide batches of training data to the network.
def data_generator(image_size, sentence_len, sentence_max_word, batch_size=32):
    while True:
        x_img = np.zeros((batch_size, image_size[0], image_size[1], image_size[2]))
        x_sen = np.zeros((batch_size, sentence_len))
        y_img = np.zeros((batch_size, image_size[0], image_size[1], image_size[2]))
        y_sen = np.zeros((batch_size, sentence_len, sentence_max_word))
        for i in range(batch_size):
            img = rand_img(image_size)
            sentence = rand_sentence(sentence_len, sentence_max_word)
            sentence_onehot = onehot(sentence, sentence_max_word)
            x_img[i] = img
            x_sen[i] = sentence
            y_img[i] = img
            y_sen[i] = sentence_onehot
        yield [x_img, x_sen], [y_img, y_sen]
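A minimal sketch of how the generator can be inspected (the shapes assume a 100x100 RGB image, a sentence length of 100 and a vocabulary of 128 ASCII codes, the values used later):
#Pulling one batch from the generator to verify the shapes
gen = data_generator((100, 100, 3), 100, 128, batch_size=32)
(bx_img, bx_sen), (by_img, by_sen) = next(gen)
print(bx_img.shape, bx_sen.shape)  # (32, 100, 100, 3) (32, 100)
print(by_img.shape, by_sen.shape)  # (32, 100, 100, 3) (32, 100, 128)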
5) "get_model(image_shape, sentence_len, max_word)" function: This function defines a Keras model for a neural network that takes both images and sentences as input, and produces both reconstructed images and reconstructed sentences as output. The model has two main components: an encoder model that processes the input images and sentences and produces the output images, and a decoder model that takes the output images from the encoder and produces the output sentences.
def get_model(image_shape, sentence_len, max_word):
    # Encoder inputs: a cover image and an integer-encoded sentence
    input_img = Input(image_shape)
    input_sen = Input((sentence_len,))
    # Embed the sentence and reshape it into a single-channel "image" plane
    embed_sen = Embedding(max_word, 100)(input_sen)
    flat_emb_sen = Flatten()(embed_sen)
    flat_emb_sen = Reshape((image_shape[0], image_shape[1], 1))(flat_emb_sen)
    # Mix the sentence plane with features of the cover image and project
    # back to 3 channels: this is the stego image
    trans_input_img = Conv2D(20, 1, activation="relu")(input_img)
    enc_input = Concatenate(axis=-1)([flat_emb_sen, trans_input_img])
    out_img = Conv2D(3, 1, activation='relu', name='image_reconstruction')(enc_input)
    # Decoder: recover the sentence from the stego image
    decoder_model = Sequential(name="sentence_reconstruction")
    decoder_model.add(Conv2D(1, 1, input_shape=(100, 100, 3)))
    decoder_model.add(Reshape((sentence_len, 100)))
    decoder_model.add(TimeDistributed(Dense(max_word, activation="softmax")))
    out_sen = decoder_model(out_img)
    # Full model for training, plus separate encoder/decoder models for inference
    model = Model(inputs=[input_img, input_sen], outputs=[out_img, out_sen])
    model.compile('adam', loss=[mean_absolute_error, categorical_crossentropy],
                  metrics={'sentence_reconstruction': categorical_accuracy})
    encoder_model = Model(inputs=[input_img, input_sen], outputs=[out_img])
    return model, encoder_model, decoder_model
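The following training sketch is not part of the original listing. It assumes 100x100 RGB images, a sentence length of 100 and a vocabulary of 128 (one class per ASCII code); because the decoder hard-codes an input shape of (100, 100, 3) and the embedding size is 100, sentence_len * 100 must equal image_shape[0] * image_shape[1]. The number of steps and epochs is arbitrary.
#Building and training the models (illustrative settings)
image_shape = (100, 100, 3)
sentence_len = 100   # sentence_len * 100 (embedding size) == 100 * 100 pixels
max_word = 128       # one class per ASCII code

model, encoder_model, decoder_model = get_model(image_shape, sentence_len, max_word)
train_gen = data_generator(image_shape, sentence_len, max_word, batch_size=32)
model.fit(train_gen, steps_per_epoch=100, epochs=10)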
6) "ascii_encode(message, sentence_len)" function: The function creates a numpy array with all zeros and then sets the elements corresponding to the ASCII codes of the characters in the message to the respective ASCII codes.
def ascii_encode(message, sentence_len):
    sen = np.zeros((1, sentence_len))
    for i, a in enumerate(message.encode("ascii")):
        sen[0, i] = a
    return sen
7) "ascii_decode(message)" function: The function decodes the message by converting the one-hot encoded representation of the characters back into their ASCII representation.
def ascii_decode(message):
    return ''.join(chr(int(a)) for a in message[0].argmax(-1))
With these building blocks in place, we can train the model and then use the encoder and decoder to hide a message in an image and recover it again, as in the sketch below.
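This is a short usage sketch, assuming the models have been trained as above and sentence_len is still 100; the file name "cover.png" and the message are placeholders.
#Hiding a message in a cover image and recovering it (illustrative)
cover = img_to_array(load_img("cover.png", target_size=(100, 100))) / 255.0
secret = ascii_encode("meet at noon", sentence_len)

stego = encoder_model.predict([cover[np.newaxis], secret])  # image carrying the hidden message
recovered = decoder_model.predict(stego)                    # softmax scores over ASCII codes

plt.imshow(np.clip(stego[0], 0, 1))
plt.show()
print(ascii_decode(recovered))  # should print the hidden message (plus padding characters)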
In conclusion, the system described above is a simple implementation of an image-text fusion system built from convolutional, embedding, and dense layers. While this implementation is deliberately simple, it demonstrates the potential of image-text fusion in applications such as steganography and multimedia communication.