Wednesday, 31 December 2014

Testing Functionality

Because my project depends on the constant tracking of five objects, it's important that I find a suitable way of ensuring all objects remain tracked while the audience interacts with my display. Initial testing of the Trackmate tags found that they are hard to detect in the dark, and the camera also has trouble detecting them if the tags are printed too small or too large.

The image below is of me testing the tags using the Trackmate tracker. From these tests it seems that the optimal distance from the tags to the camera is about arm's length, which is approximately 2 feet.

However, the optimal distance for the smaller tags was much shorter, approximately 1 foot. This was an important test, as the size of the tags may be significant when I test my project in the foyer space.

An alternative would be to remove the need for the user to lift the taggable objects by creating a contraption like this:

The shoebox design means the user would simply slide the objects across a reflective surface, as the camera located inside the shoebox would track the tags stuck to the bottom of the objects. Creating a contraption like this would likely be the best method, as it would ensure that the objects remain a fixed distance from the camera.

Tuesday, 30 December 2014

Processing Project: Padlock System

I have now been able to implement a padlock system in my sketch. I added this as an additional way of giving the user visual feedback on their progress. There are a total of six versions of the padlock image, and the image changes as the user gets closer to completing the sketch. I plan to develop this further by improving the presentation of the sketch, as parts of it aren't very easy to see, mainly due to colours blending into one another; overall it doesn't look very attractive, which will likely deter people from interacting with it in the foyer.

The padlock system is very similar to the if statement method I used previously when making the background turn green on the puzzle's completion. This time I have implemented the method in increments, as seen in the code below.

By coding it this way, the background image changes every time a boolean variable turns true. This gives the impression of a multi-part puzzle or a combination lock, which is a form of encryption.
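The incremental logic can be sketched as a simple count of solved parts mapped to an image index; the class, method and file names here are illustrative, not the exact code from my sketch.

```java
// Count the solved parts of the puzzle and map the count to a padlock
// image index (0 = fully locked, up to 5 = open). Names are illustrative.
class PadlockLogic {
    static int padlockIndex(boolean[] solved) {
        int count = 0;
        for (boolean s : solved) {
            if (s) count++;
        }
        // In the sketch this index would pick the file, e.g.
        // loadImage("Padlock" + count + ".png");
        return count;
    }
}
```

Each newly solved part bumps the index by one, so the padlock image steps through its six versions in order.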

Monday, 29 December 2014

Display Project Progress Update

Today I have been able to create a 3-part encryption system made of multiple if statements. At the moment it's coded so that the background turns green when all three trackable objects are at the correct distances from each other. The way I've coded this is by linking each object to a boolean variable, which gives the user visual clarification when they have solved a part of the puzzle. This visual clarification notifies the audience that they should move on to the next part of the puzzle. Here is the source code I used to make the screen turn green once all three parts are solved.

At the moment the prototype is only coded so that it measures the distances between objects 1 & 2, 1 & 3 and 1 & 4. This results in every object ultimately linking to just one object.

I plan to make the puzzle more complex by generating more links and measuring the distances to two objects instead of just one. This would make the sketch look much more sophisticated, as the movement of one object would affect the distances of other objects. By programming more rules into the sketch I mitigate possible loopholes which could be exploited by the audience, and as the primary concept of this piece is encryption, it's important there aren't any loopholes.
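The core of each distance rule is a Euclidean distance plus a range test. A minimal sketch of that logic (the thresholds and names are illustrative, not my actual values):

```java
// Euclidean distance between two tracked objects, plus the range test
// that flips a part of the puzzle to "solved". Thresholds are examples.
class DistanceCheck {
    static float dist(float x1, float y1, float x2, float y2) {
        float dx = x2 - x1;
        float dy = y2 - y1;
        return (float) Math.sqrt(dx * dx + dy * dy);
    }

    // A part is solved when the distance sits inside a target band.
    static boolean inRange(float d, float min, float max) {
        return d >= min && d <= max;
    }
}
```

In Processing itself the built-in `dist()` function does the same job, so each boolean would be set by something like `inRange(dist(x1, y1, x2, y2), 140, 160)`.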

Friday, 26 December 2014

Cryptography & Steganography

Cryptography is the process of scrambling readable information into an unreadable, fragmented state. It's one of the earliest forms of encryption and was fundamentally designed to ensure that information couldn't be read by anyone other than the sender and the desired recipient. This is done by ensuring that the sender and recipient share a secret key which is needed to decrypt the data. An example of this is a password for a personal computer: the password is known by the sender (the PC) and the recipient (the user), and the user has to input this password in order to gain access to their files. Because a password has a vast number of possible combinations, it is very difficult for a third party who doesn't know the key to gain access.
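As a toy illustration of the shared-key idea (not the method my project uses, and far too weak for real security), XOR-ing data with the same key both encrypts and decrypts it:

```java
// XOR-ing each byte with a repeating key scrambles the data; XOR-ing
// again with the same key restores it. Purely illustrative.
class XorToy {
    static byte[] xor(byte[] data, byte[] key) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) {
            out[i] = (byte) (data[i] ^ key[i % key.length]);
        }
        return out;
    }
}
```

Anyone who holds the key can reverse the scrambling; anyone without it sees only noise, which is the essence of a shared-key crypto-system.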

Steganography is a different approach from cryptography, as a crypto-system doesn't conceal its true purpose. A crypto-system can be interacted with by anybody but is almost impossible to access without the vital key. A steganography system is different: it aims to conceal the existence of the message altogether, therefore not attracting the attention of third parties.

An example of a steganography method would be writing in invisible ink: the message and method wouldn't be known by third parties, but the receiver would know the method, enabling them to decrypt the message.

Cryptography is the encryption method I'm using for my project, as I think a steganography method wouldn't be clear to the audience. The cryptography method I'm using may seem unfamiliar to the user at first, but as they move the trackable objects around and interact with the display they should notice it reacting to their interaction, and as a result understand what they need to do to solve the puzzle.

The cryptography method I'm using in my display does consist of a secret key. The secret key consists of five sets of numbers; these numbers are inputted into the sketch by positioning the trackable objects at certain distances from each other. I believe this crypto-system would be very difficult to decrypt if I didn't provide visual confirmation for each part of the puzzle that is completed.

Wednesday, 24 December 2014

Updated Pseudo Code

Having almost completed the code described in my previous pseudo code post, I believe it's important to make it more complex. Getting the functionality in the previous pseudo code working was an important learning experience, but I don't believe it's sophisticated enough to be challenging for the public.

The new method I'm proposing uses the same "if" statement approach as the previous pseudo code; however, the statements will become true based on the distances between each object instead of their individual positions on the grid.

This means the user needs to position the objects at certain distances from the other objects in order to decrypt the puzzle, and as the puzzle will use five objects there are multiple stages, therefore making it more complex.

Step 1: If tracked object #1 is between 140 & 160 distance from object #2, KeyUnlock1 = true

Step 2: If tracked object #2 is between 170 & 195 distance from object #1, KeyUnlock2 = true

Step 3: If tracked object #3 is between 245 & 265 distance from object #1, KeyUnlock3 = true

Step 4: If tracked object #2 is between 155 & 170 distance from object #3, KeyUnlock4 = true

Step 5: If tracked object #4 is between 620 & 660 distance from object #5, KeyUnlock5 = true

Step 6: While KeyUnlock1 = true
          loadImage = Unlocked1.PNG
        Else
          loadImage = Locked1.PNG

Step 7: While KeyUnlock2 = true
          loadImage = Unlocked2.PNG
        Else
          loadImage = Locked2.PNG

Step 8: While KeyUnlock3 = true
          loadImage = Unlocked3.PNG
        Else
          loadImage = Locked3.PNG

Step 9: While KeyUnlock4 = true
          loadImage = Unlocked4.PNG
        Else
          loadImage = Locked4.PNG

Step 10: While KeyUnlock1 + KeyUnlock2 + KeyUnlock3 + KeyUnlock4 + KeyUnlock5 = true
           loadImage = Unlocked5.PNG
           Make the background image clear and not distorted
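The per-key steps above could translate into plain Java logic roughly like this; the helper names are my own invention, and only the image file names come from the pseudo code:

```java
// Pick the padlock image for one key (file names follow the pseudo code)
// and check the final all-unlocked condition from Step 10.
class KeyImages {
    static String imageFor(int key, boolean unlocked) {
        return (unlocked ? "Unlocked" : "Locked") + key + ".PNG";
    }

    // Step 10: the final image only appears once every key is true.
    static boolean allUnlocked(boolean... keys) {
        for (boolean k : keys) {
            if (!k) return false;
        }
        return true;
    }
}
```

In the actual sketch, `imageFor` would feed Processing's `loadImage()` each frame, so a key flipping back to false immediately re-locks its padlock on screen.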

Sunday, 14 December 2014

Processing Progress Update

To speed up code testing for this project I've chosen to use a Trackmate simulator, which enables me to test the functionality of the code without needing my webcam, which would take longer. The disadvantage of the simulator is that it doesn't let me test how the display would function with the actual physical objects being moved, but I will be sure to conduct those tests further in development once I've made significant progress.

The progress I've made with the code is that I've implemented an if statement which causes the background to change colour when two objects are a certain distance from each other. This distance tracking technique is the method of decryption I'm going to use; however, I'll be creating "if" statements which track distances between four objects instead of just two. This method enables me to increment and decrement the values which improve or hinder the clarity of the background image, which the user can use to determine if they are getting close to solving the puzzle.
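One hypothetical way to map progress onto image clarity would be to shrink the pointillise dot size as more keys are solved; the numbers below are placeholders, not values from my sketch.

```java
// Map puzzle progress to a pointillise dot size: fewer solved keys means
// larger dots and a more distorted image. Sizes are placeholder values.
class Clarity {
    static int dotSize(int solvedKeys, int totalKeys) {
        int maxDot = 24; // fully distorted
        int minDot = 2;  // fully clear
        return maxDot - (maxDot - minDot) * solvedKeys / totalKeys;
    }
}
```

The sketch's draw loop could call this each frame and redraw the background with the returned dot size, so every solved key visibly sharpens the image.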

My next step is to implement the image I want encrypted and apply filters to it; once I've done that, I will start implementing more if statements.

Monday, 8 December 2014

Project Methodology

The project methodology I will likely adopt in the development of this project is the Agile development cycle, which involves quick sprints of development followed by testing. The project then undergoes changes based on the feedback received. This is an iterative process, often repeated multiple times before the product is finally released. For my project I will likely produce a functioning version over Xmas and get relatives and friends to test it, then adapt the project to the feedback received, ready for the second iteration of testing after Xmas, which will take place in the foyer space. The feedback and results from the 2nd iteration should give me a good idea of how to optimise my project for the final deadline.

I will keep a log of the development of the project on my blog and go into detail on key elements of the development.

Pseudo Code For Public Display

This is the pseudo code for my public display project; it's a rough plan of how I imagine the project will function. The sketch will begin with a pointillised background which isn't clear; the user needs to decrypt the puzzle by putting the physical objects in the correct order, which makes the image clear.

Step 1: If tracked object #1 is in the first quarter of the screen width, part1 = true

Step 2: If tracked object #2 is in the second quarter of the screen width, part2 = true

Step 3: If tracked object #3 is in the third quarter of the screen width, part3 = true

Step 4: If tracked object #4 is in the fourth quarter of the screen width, part4 = true

Step 5: While part1 = true
          play sound = correct.wav
          fill first quarter of screen with yellow

Step 6: While part2 = true
          play sound = correct.wav
          fill second quarter of screen with 50% opacity green

Step 7: While part3 = true
          play sound = correct.wav
          fill third quarter of screen with 50% opacity green

Step 8: While part4 = true
          play sound = correct.wav
          fill fourth quarter of screen with 50% opacity green

Step 9: While part1 + part2 + part3 + part4 = true
          play sound = complete.wav
          pointillise image with smaller dots so the image is clear and not distorted
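The quarter-of-screen test in steps 1-4 boils down to dividing an object's x position by a quarter of the display width. A minimal sketch, assuming pixel coordinates:

```java
// Which quarter of the display width an object's x position falls into
// (1 to 4), or -1 if it is off screen. Coordinates assumed in pixels.
class QuarterCheck {
    static int quarter(float x, float screenWidth) {
        if (x < 0 || x >= screenWidth) return -1;
        return (int) (x / (screenWidth / 4)) + 1;
    }
}
```

Each part's boolean would then be a comparison like `quarter(x1, width) == 1`, with `width` being Processing's built-in display width variable.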

Design Adaptation To Environment

Similar to an earlier post where I talked about how objectives influence the design of the project, in this post I'll be writing about the impact the environment has on the design. In the requirements gathering stage of development I learnt about the space the project will be located in, as well as the behaviour of people interacting with and passing through the space. From the research I conducted, I concluded that the coffee area was of most interest to the people in the space, so it would be wise of me to seek a screen in that area, or at least in view of the Costa area. However, during peak times such as lunchtime this area does get quite busy, meaning people may not have enough space to interact with my display, or might prefer not to perform in front of large volumes of people.

With this in mind, I'm going to aim to make my interactive display consume as little space as possible, as this would likely mitigate glitches caused by large volumes of people walking in front of or behind the user; my design is supposed to be operated by one user but also spectated by people out of view of the camera.

The lighting in the space is also quite dark, meaning I will likely need to provide additional lighting for my display so the camera can track the traceable objects, which will be rather small. When testing the Trackmate tags I noticed the camera had trouble detecting them when I was either too close to or too far from the camera, as it looks like the Lucid library only expects to detect tags of a roughly fixed size. As a result, I will likely need to highlight an area on the ground for users to stand on, ensuring they stay the correct distance from the camera and mitigating some errors.

Follow Up Of Previous Post

In my previous post I stated that some of the new features available in the Reality 4 plugin would make it relatively easy to produce the special effects seen in the "Die Another Day" and "Girl With The Dragon Tattoo" intro sequences. In the examples seen below I used a gold metal texture for one and a mirrored glass texture for the other.

Replaced the default skin map with a hammered gold texture

Replaced the default skin map with a mirror texture

The IBL lighting method has worked really well with these textures, as the background colours reflect off the texture. The mirrored texture produces a pretty effective camouflage effect, similar to that seen in the "Predator" movie; I would need to adjust the transparency to produce something more accurate, but it's feasible.

One way I could develop my 3D modelling would be to learn how to produce my own IBL images to use in my renders, as this lighting method seems to produce great results.

Sunday, 7 December 2014

IBL Lighting VS Mesh Lights

Due to the recent release of a new DAZ3D plugin called Reality 4, I have been testing some of its new features, such as applying material presets to 3D objects to make them look more realistic. Prior to this plugin, every 3D object in a scene used an image map texture, meaning a 3D mesh was wrapped with an image. With the new plugin, 3D objects can have materials such as marble, wood and water applied to them, which enables digital designers to create quite amazing textures.

This feature makes it possible to apply elemental effects to 3D figures much like the effects used in  the "Girl with Dragon Tattoo" & "Die Another Day" opening sequences.

Girl With The Dragon Tattoo

Die Another Day
In the two renders below I used the same model and camera angle; only the lighting method differs. One render uses mesh lighting, which is essentially a custom-made light projected at the model with an intensity similar to that of sunlight. The second method uses IBL lighting, where a panoramic image is wrapped around the scene and the scene adopts the light sources of that image. This technique is useful when rendering reflective surfaces such as metal and water, making it perfect for rendering cars and the like. To keep the test fair, both renders were given 1 hour each to render; the results can be seen below.

Here is the IBL image used for render 2:

Render 1: Mesh Lighting

Render 2: IBL Lighting

You can see that the blue of the sky in the IBL image is cast on the model's white hair, and sunlight is only cast on her left arm, with the rest of her shaded from the sun. IBL does produce a more realistic 3D scene; however, I would argue it also produces much more noise in the render, which hinders the image clarity/quality compared to the mesh lighting method used in render 1.

Tuesday, 2 December 2014

FaceGen Character Creation Tutorial


In this tutorial I will guide you through how to create a 3D custom character using FaceGen and DAZ3D.


Reality 2.0 (For DAZ3D renders)

1.    Open FaceGen Modeller

2.    Select “PhotoFit” from the menu

3.    Click "Next" and then click "Load" where the frontal image is displayed

4.    Select a relatively high quality frontal face image of your character

5.    Once you have selected your image, click "Next"

6.    Now you need to assign feature points to the correct parts of your character's face. During this step you only need to place the feature points roughly where they're supposed to go, as you will get another opportunity to place them perfectly once you click "Next"

7.    You can now see that your frontal image has zoomed in. This gives you the opportunity to place the feature points more accurately, which is useful if you're working with larger images such as a full body image.

8.    Once you are happy with the placement of the feature points, click "Next". (During step 4 it's recommended that you tick the "preserve facial hair" box when creating male characters.)

9.    Your character should now be generated. From personal experience the higher the quality of the frontal image the better the overall results. 

10.    You now need to save the face, click on File > Save As. Ensure you save it somewhere you can find it.

11.    Once the face is saved you now need to open up FaceGen Exporter

12.    FaceGen Exporter is very simple: all you need to do is click on the large dotted button and find the face you just saved (it should be an FG file). Also enter a name for your morph (face), then click "Create export files".

13.    Once you click "Create Export Files" you will be shown the following instructions, which you will refer to later.

14.     Now open DAZ3D

15.    You need to go to “Content Library” at the right of the screen and find “Figures”.

16.    You now want to find the “DAZ People” dropdown menu  and double click on “Basic Male”

17.     The model should then load onto the stage.

18.    You now need to right click on the model's face and select "Genesis" : "Head".

19.    With the head now selected, you need to go to the "Shaping" panel located at the left of the screen.

20.    With the shaping panel selected you now need to click on “Head” which is under the “Genesis” panel.

21.    Scroll down until you find the name of the morph you named in FaceGen Exporter.

22.    Swipe the slider to the right so its value is 1.00. The character should now have the head shape of your character.

23.    We now need to apply the face texture to the model; to do this, click on the "Surface (Color)" panel.

24.    Click on the "Genesis" panel and you should see the following. Ensure that all sections are highlighted in yellow, as seen in the image below, and change the UV Set to "Genesis Male".

25.    You now need to find the DAZ3D textures folder in its install directory. (Here's my directory)

26.    Open the folder for your character and you should have something like this.

27.    Apply the textures as recommended by FaceGen Exporter.

For example, all of the yellow sections are numbered 1, so they would use the texFace texture.

28.    Your character should look similar to the one in the image below.

29.    You can right click on the character and select "Genesis" to highlight the whole body; you can now edit the body in the shaping panel to change the body shape.

30.    You can now animate or pose your character using the “Pose & Animate” panel or add clothing and hair etc. You can also export your figure to Zbrush to add details or simply export it as .obj and work with it elsewhere.

Useful Video for using DAZ3D characters in Unity

Reality 2.0 Render