Visual cryptography, a technique proposed by Moni Naor and Adi Shamir in 1994, encrypts visual information with an encryption algorithm but requires no decryption algorithm to reveal it: decryption is performed directly by the human visual system. During encryption, noise is added to the original image to hide its information; during decryption, the noise is removed to reveal it. Visual cryptography uses transparent images. An image is broken up into n shares such that anyone holding all n shares can decrypt the image, while any n − 1 shares reveal no information about the original image. Each share is printed on a separate transparency, and decryption is performed by overlaying the shares; when all n shares are overlaid, the original image appears. In the two-share case, one image contains random pixels and the other contains the secret information, and it is impossible to retrieve the secret information from either image alone [2].
The original secret image is recovered by superimposing the two share images. The secret image is composed of black and white pixels, and the underlying operation of such a scheme is the logical OR. Generally, a (k, n)-VCS takes a secret image as input and outputs share images that satisfy two conditions: first, any k of the n share images can recover the secret image; second, fewer than k share images give no information about it. Similar models of visual cryptography with different underlying operations have been proposed, such as the XOR operation introduced in [2–6] and the NOT operation introduced in [7], which exploits the reversing function of copy machines.
In a VCS, a secret image is encrypted into several share images. The secret image is called the original secret image for clarity, and the share images are the encrypted images (called transparencies if they are printed out). When a qualified set of share images (transparencies) is stacked together properly, it yields a visual image almost identical to the original secret image, called the recovered secret image. For black-and-white images, the original secret image is represented as a pattern of black and white pixels. Each pixel is divided into subpixels, themselves encoded as black and white, to produce the share images. The recovered secret image is likewise a pattern of black and white subpixels, which visually reveals the original secret image when a qualified set of share images is stacked. In this paper we focus on black-and-white images, where a white pixel is denoted by 0 and a black pixel by 1.
The easiest way to implement visual cryptography is to print the two layers onto transparent sheets. When the random image contains truly random pixels, the scheme can be seen as a one-time pad and offers unbreakable encryption.
Table 1. Basic encoding idea in Naor and Shamir's scheme
Naor and Shamir [4] proposed an encoding scheme to share a binary image into two shares, Share1 and Share2. If a pixel is white, one of the upper two rows of Table 1 is chosen to generate Share1 and Share2; similarly, if a pixel is black, one of the lower two rows is chosen. Each secret pixel p is thus encoded into two white and two black subpixels on each share, so a single share gives no clue about whether p is white or black; the secret image is revealed only when both shares are superimposed. Researchers have recommended various parameters to evaluate the performance of a visual cryptography scheme. Naor and Shamir [4] suggested two main parameters: pixel expansion m and contrast α. Pixel expansion m is the number of subpixels in the generated shares that represent one pixel of the original input image; it measures the loss in resolution from the original picture to the shared one. Contrast α is the relative difference in weight between combined shares that come from a white pixel and from a black pixel in the original image.
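As an illustration of these two parameters, the following Python sketch (not from the paper) encodes single pixels under the classic (2, 2) scheme with m = 4 subpixels and computes the resulting contrast; the pattern list and function names are illustrative.

```python
# Minimal sketch of Naor-Shamir (2, 2) pixel encoding with m = 4 subpixels.
import random

# Candidate 2x2 subpixel patterns, flattened; 1 = black, 0 = white.
# Each pattern has exactly two black and two white subpixels.
PATTERNS = [(1, 1, 0, 0), (0, 0, 1, 1), (1, 0, 1, 0),
            (0, 1, 0, 1), (1, 0, 0, 1), (0, 1, 1, 0)]

def encode_pixel(bit):
    """Return the two share patterns for one secret pixel (0=white, 1=black)."""
    p = random.choice(PATTERNS)
    if bit == 0:                         # white: both shares get identical patterns
        return p, p
    complement = tuple(1 - v for v in p) # black: complementary patterns
    return p, complement

def stack(s1, s2):
    """Stacking transparencies acts as a logical OR on subpixels."""
    return tuple(a | b for a, b in zip(s1, s2))

m = 4
white = stack(*encode_pixel(0))  # weight 2: half the subpixels are black
black = stack(*encode_pixel(1))  # weight 4: all subpixels are black
alpha = (sum(black) - sum(white)) / m
print(sum(white), sum(black), alpha)  # 2 4 0.5
```

Either share alone always carries two black and two white subpixels regardless of the secret bit, which is why a single share leaks nothing.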
First, input a colored image in the RGB color model, then split it into the CMY model. CMY is used because printers typically employ it: the subtractive model is more suitable for printing colors on transparencies, so we use the CMY model to represent colors in what follows. Since (R, G, B) and (C, M, Y) are complementary colors, in the true color model they satisfy C = 255 − R, M = 255 − G, Y = 255 − B. Thus, in the (C, M, Y) representation, (0, 0, 0) represents full white and (255, 255, 255) represents full black. We therefore first split the RGB channels of the original image and then convert RGB to CMY with the following equations (implemented in MATLAB 6.1, with values normalized to [0, 1]):
c = 1 - (double(r)/255);  % cyan
m = 1 - (double(g)/255);  % magenta
y = 1 - (double(b)/255);  % yellow
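The same normalized conversion can be sketched in Python as follows (the function name is illustrative; pixel values are assumed to be 8-bit, 0–255):

```python
# Normalized RGB -> CMY conversion, mirroring the MATLAB lines above.
def rgb_to_cmy(r, g, b):
    """Map 8-bit RGB values to CMY values in [0, 1]."""
    c = 1 - r / 255.0
    m = 1 - g / 255.0
    y = 1 - b / 255.0
    return c, m, y

print(rgb_to_cmy(255, 255, 255))  # pure white -> (0.0, 0.0, 0.0)
print(rgb_to_cmy(0, 0, 0))        # pure black -> (1.0, 1.0, 1.0)
```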
We then apply a halftoning algorithm to these three channel images separately. Many halftoning algorithms exist; here the Floyd–Steinberg error-diffusion algorithm is used, which yields three halftoned images, one each for cyan, magenta and yellow. Each pixel is compared against a threshold (T = 127): if its intensity is greater than T it is set to 255, otherwise to 0, and the quantization error is diffused to neighbouring pixels.
```
  -      -      *    8/24    0
2/24   4/24    0     0    2/24
1/24    0    4/24  2/24   1/24
```
Coefficients of the proposed error-diffusion filter (* marks the current pixel)
The black spot (*) represents the current pixel being thresholded. The filter coefficients in the error-diffusion filter are indexed relative to the current pixel; each coefficient determines what fraction of the quantization error is passed to the pixel at that position relative to the current pixel.
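The error-diffusion step can be sketched as follows. Note this sketch uses the classic Floyd–Steinberg weights (7/16, 3/16, 5/16, 1/16) that the text names, not the /24 filter of the proposed method; the function name is illustrative.

```python
# Floyd-Steinberg error-diffusion halftoning with threshold T = 127.
def halftone(img):
    """img: list of rows of grey values 0-255; returns a 0/255 halftone."""
    h, w = len(img), len(img[0])
    buf = [[float(v) for v in row] for row in img]  # working copy for error accumulation
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            old = buf[i][j]
            new = 255 if old > 127 else 0   # threshold against T = 127
            out[i][j] = new
            err = old - new
            # Diffuse the quantization error to unvisited neighbours.
            if j + 1 < w:
                buf[i][j + 1] += err * 7 / 16
            if i + 1 < h:
                if j - 1 >= 0:
                    buf[i + 1][j - 1] += err * 3 / 16
                buf[i + 1][j] += err * 5 / 16
                if j + 1 < w:
                    buf[i + 1][j + 1] += err * 1 / 16
    return out

print(halftone([[200, 50], [50, 200]]))  # -> [[255, 0], [0, 255]]
```

Swapping in the proposed /24 coefficient table above is a matter of changing the diffusion offsets and weights.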
1. Read the encrypted colored image shares.
2. Apply the halftoning algorithm to the cover images; a different cover image is read for each share.
3. Replace half of the 1-pixels of the cover image, and all of its 0-pixels, with the corresponding pixels of the respective share.
4. Distribute the final shares to the participants.
At the decryption end, the shares only have to be stacked together; no other computation is required to reconstruct the image. This method is a modified visual cryptography scheme for colored images.
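One plausible reading of step 3 above can be sketched as follows; the paper gives no pseudocode, so the 50% replacement rule and all names here are illustrative assumptions.

```python
# Hypothetical sketch of the stamping step: every 0-pixel of the halftoned
# cover, and (on average) half of its 1-pixels, are overwritten with the
# corresponding share pixel.
import random

def stamp(cover, share):
    """cover, share: equal-size 0/1 pixel images as lists of rows."""
    out = []
    for crow, srow in zip(cover, share):
        row = []
        for c, s in zip(crow, srow):
            if c == 0:
                row.append(s)  # all zeros of the cover are replaced by the share
            else:
                # half of the ones are replaced (assumed to mean: at random)
                row.append(s if random.random() < 0.5 else c)
        out.append(row)
    return out

print(stamp([[0, 0], [0, 0]], [[1, 0], [0, 1]]))  # all-zero cover -> exactly the share
```

The surviving cover 1-pixels are what make the final share a meaningful image rather than random noise.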
Figure 1. Generating Shares of colored images
The generated halftoned shares are stamped with the cover image. The cover images can differ between shares or be the same for all shares. The procedure is shown in the following flowchart.
Figure 2. Stamping procedure (flowchart: each share is halftoned, stamped with a cover image, and output as Final Share 1 and Final Share 2)
Algorithm for Encryption VCS
Step 1: Input the halftoned image together with the secret image.
Step 2: Initialize two collections of n × m Boolean matrices, S0 and S1. S0 acts as a pool of matrices from which a matrix S is chosen at random to represent a white pixel, while S1 acts as a pool from which S is chosen at random to represent a black pixel. The construction is most easily illustrated by a 2-out-of-2 visual cryptographic scheme.
Step 3: Using the permuted basis matrices, each pixel of the secret image is encoded into two subpixels on each participant's share. A black pixel of the secret image is encoded on the ith participant's share as the ith row of matrix S1, where 1 represents a black subpixel and 0 a white subpixel. Similarly, a white pixel of the secret image is encoded on the ith participant's share as the ith row of matrix S0.
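The three steps above can be sketched for the 2-out-of-2 case with two subpixels per pixel; the basis matrices below are the standard ones for this scheme, and the function names are illustrative.

```python
# Sketch of (2, 2) share generation with basis matrices and per-pixel
# random column permutation, as described in steps 2-3.
import random

S0 = [[1, 0], [1, 0]]  # white: identical rows -> stack shows 1 black, 1 white subpixel
S1 = [[1, 0], [0, 1]]  # black: complementary rows -> stack shows 2 black subpixels

def encode(secret):
    """secret: list of rows of 0/1 pixels; returns the two participants' shares."""
    share1, share2 = [], []
    for row in secret:
        r1, r2 = [], []
        for pix in row:
            S = S1 if pix == 1 else S0
            cols = [0, 1]
            random.shuffle(cols)              # permute the basis matrix columns
            r1.extend(S[0][c] for c in cols)  # row i of S -> participant i's share
            r2.extend(S[1][c] for c in cols)
        share1.append(r1)
        share2.append(r2)
    return share1, share2

def stack(a, b):
    """Overlaying transparencies = subpixel-wise OR."""
    return [[x | y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

s1, s2 = encode([[1, 0]])
print(stack(s1, s2))  # black pixel -> two black subpixels; white -> one black, one white
```

Each share row contains exactly one black subpixel per secret pixel whatever the secret bit, so a single share reveals nothing.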
Algorithm for Stamping the Cover Image (Proposed)
The stamping algorithm follows the four steps listed earlier: read the encrypted colored image shares, halftone a cover image for each share, replace half of the cover image's 1-pixels and all of its 0-pixels with the respective share, and distribute the final shares to the participants.
The complexity is linear in the number of if/else conditions, as the algorithm performs no other operations. The existing algorithm has four patterns, so it performs four operations per pixel:
1. Decomposing into C, M, Y
2. Halftoning
3. Pixel splitting for encryption
4. Merging the C, M, Y shares
If n is the number of pixels, the complexity of the existing method is therefore 4n. The proposed method involves 14 if/else conditions, since more specific comparisons are made in the process, so its complexity is 14n.
There are basically two approaches to measuring image quality:
1. Subjective measurement
2. Objective measurement
1. Subjective measurement
A number of observers are selected, tested for their visual capabilities, shown a series of test scenes, and asked to score the quality of the scenes. This is the only truly "correct" method of quantifying visual image quality. However, subjective evaluation is usually too inconvenient, time-consuming and expensive for most applications.
2. Objective measurement
These are automatic algorithms that analyse images and report their quality without human involvement, eliminating the need for expensive subjective studies. Objective image quality metrics can be classified according to the availability of an original (distortion-free) image with which the distorted image is compared.
(i) Mean Squared Error (MSE): An obvious way of measuring similarity is to compute an error signal by subtracting the test signal from the reference, and then to compute the average energy of the error signal. The mean squared error (MSE) is the simplest and most widely used full-reference image quality measurement. This metric is frequently used in signal processing and is defined as

MSE = (1 / MN) Σᵢ₌₁ᴹ Σⱼ₌₁ᴺ [x(i, j) − y(i, j)]²,

where x(i, j) is the original (reference) image, y(i, j) is the distorted (modified) image, and (i, j) indexes the pixels of the M × N image. The MSE is zero when x(i, j) = y(i, j) for all pixels.
(ii) Peak Signal-to-Noise Ratio (PSNR): The PSNR is measured in decibels and is inversely related to the mean squared error. For 8-bit images it is given by [9]

PSNR = 10 log₁₀(255² / MSE).
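Both measures can be sketched in a few lines of Python for 8-bit images (peak value 255); the function names are illustrative.

```python
# MSE and PSNR for 8-bit greyscale images stored as lists of rows.
import math

def mse(x, y):
    """Mean squared error between reference x and distorted y (same size)."""
    M, N = len(x), len(x[0])
    return sum((x[i][j] - y[i][j]) ** 2 for i in range(M) for j in range(N)) / (M * N)

def psnr(x, y):
    """Peak signal-to-noise ratio in decibels; infinite for identical images."""
    e = mse(x, y)
    return float("inf") if e == 0 else 10 * math.log10(255 ** 2 / e)

a = [[10, 20], [30, 40]]
b = [[12, 20], [30, 40]]
print(mse(a, b))   # 2^2 / 4 = 1.0
print(psnr(a, b))  # 10 * log10(65025) ~ 48.13 dB
```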
The table below shows the image quality measurements for the reconstructed image. Since the original and reconstructed images have different aspect ratios and sizes, the images first have to be resized to equal dimensions. There are two options:
1) Resize the original image to the reconstructed one.
2) Resize the reconstructed image to the original one.
Here we chose the second option.
The results are shown in the table below. They show that with the proposed low-complexity stamping algorithm the contrast of the reconstructed image is decreased, while the XOR-based VCS for colored images gives better visibility. The MSE and PSNR values indicate low picture quality because the size of the encrypted image is changed by the proposed VCS method.
Figure 3. Result of the proposed method with meaningful shares
The proposed method for stamping a cover image is very simple and easy compared with other algorithms, but the contrast is degraded at the decryption end. Without a cover image, the proposed method gives a better-quality reconstructed image. This work is highly applicable in the military field. It works well for softcopy decryption, but hardcopy decryption degrades the contrast of the reconstructed image. For the stamping algorithm, the cover image and the shares must be the same size, so the cover image has to be resized. The pixel expansion doubles the breadth of the original image while the height stays the same: each pixel is split into two subpixels instead of four, which decreases the size of the shares. Security is improved because the constructed shares consist of random-looking dots, so intruders get no clue about the original image.