I’m currently developing an Android application as part of my 6th (and final) semester of the Medialogy bachelor. The application recommends bars (and other nightlife venues) based on which of the user’s friends are currently checked in at the bars via Facebook Places. To make a recommendation, the application looks at your Facebook relationships with these friends and from there tries to determine which of them you’d rather hang out with. The application is finally running on a smartphone, and I thought I’d post a couple of screenshots. At this point, no graphical elements are implemented, so it probably doesn’t look like much.
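To give a rough idea of the approach, here’s a minimal sketch of the ranking step (all names and types are made up for illustration; this is not the app’s actual code): each venue is scored by summing a “friendship strength” value over the friends checked in there.

```csharp
using System.Collections.Generic;
using System.Linq;

// Sketch: rank venues by how much the user seems to like the
// friends checked in at each one. Names and scores are hypothetical.
class Recommender
{
    public static IEnumerable<string> RankVenues(
        Dictionary<string, List<string>> checkIns,     // venue -> friend ids
        Dictionary<string, double> friendshipStrength) // friend id -> score
    {
        return checkIns
            .OrderByDescending(venue => venue.Value.Sum(friend =>
                friendshipStrength.TryGetValue(friend, out var s) ? s : 0.0))
            .Select(venue => venue.Key);
    }
}
```

In the real application the friendship strength would be derived from the Facebook relationship data mentioned above; the ranking itself is the simple part.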
As you might’ve noticed, I have updated the design of the blog pretty heavily. I don’t think I’ve ever been so satisfied with the look of the site before, so to celebrate I thought I’d post a small entry about the paper I wrote during my 5th semester of Medialogy in the fall of 2010. While the semester was mainly spent developing an iPhone application, the purpose of the paper was to improve the user experience of a locative media application by using lighting that simulated the real lighting at the user’s location, i.e. we made an augmented reality application in which the weather mimicked the current weather in the real world. We basically concluded that since we did the testing in December, the average lighting differences throughout the day were too small to give us any meaningful results. The paper is written in LaTeX though, which makes it all hot and juicy.
Take a look for yourself right here.
I’m now in my 5th semester of Medialogy, and we’re currently developing an iPhone application. The application lets the user use the iPhone as a camera that looks back in time, i.e. if you’re standing at Christiansborg in Copenhagen and point the iPhone at the building, you should see how the area looked 500 or 1000 years ago. We also wanted this environment to be interactive, i.e. the user should be able to walk and look around. This means that we wanted to model a 3D environment of the area and synchronise the user’s real-life position with their position in this 3D environment, and also let the user use the iPhone as virtual reality goggles: if you tilt the iPhone upwards you see the sky, if you turn around you see what’s behind you in the 3D world, etc.
We’ve been making the models in Maya and building the application itself in Unity, since it has pretty strong iOS support. All our Unity scripts are written in C#.
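As a taste of what the goggles part looks like in code, here’s a minimal sketch of a gyroscope-driven camera in Unity (not our actual project code; the quaternion flip is the usual conversion from the device’s right-handed attitude to Unity’s left-handed space):

```csharp
using UnityEngine;

// Sketch: rotate the camera with the device gyroscope so the iPhone
// acts as a window into the 3D world. Attach to the Unity camera.
public class GyroCamera : MonoBehaviour
{
    void Start()
    {
        Input.gyro.enabled = true; // the gyroscope is off by default
    }

    void Update()
    {
        // Flip z/w to go from right-handed device space to Unity's
        // left-handed space, then tilt 90 degrees so the horizon lines up.
        Quaternion att = Input.gyro.attitude;
        transform.localRotation =
            Quaternion.Euler(90f, 0f, 0f) *
            new Quaternion(att.x, att.y, -att.z, -att.w);
    }
}
```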
Basically the application will have the following features:
- User can move around by physically moving (Assisted GPS).
- User can look around by using the iPhone as a camera into another world (Gyroscope).
- Weather conditions in the 3D environment mimic real-life weather conditions (Network connection to DMI, the Danish Meteorological Institute).
- Position of the sun is calculated based on the current date and time (see the sketch right after this list).
- Lighting conditions are calculated by using online information about solar radiation.
- A graphical user interface allows the user to switch between time periods.
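For the sun position, the calculation boils down to the standard declination/hour-angle approximation. Here’s a rough sketch of the idea (not our exact implementation; the latitude is hard-coded to Copenhagen, and the longitude correction and equation of time are ignored for simplicity):

```csharp
using System;
using UnityEngine;

// Sketch: approximate the sun's elevation from the current date, time
// and latitude, then tilt a directional light accordingly.
public class SunLight : MonoBehaviour
{
    public float latitudeDeg = 55.68f; // Copenhagen (assumption)

    void Update()
    {
        DateTime now = DateTime.Now;

        // Solar declination: roughly -23.44 deg * cos(360/365 * (N + 10))
        float decl = -23.44f *
            Mathf.Cos(Mathf.Deg2Rad * (360f / 365f) * (now.DayOfYear + 10f));

        // Hour angle: 15 degrees per hour away from (local) solar noon
        float hourAngle = 15f * (now.Hour + now.Minute / 60f - 12f);

        // sin(elevation) = sin(lat)sin(decl) + cos(lat)cos(decl)cos(H)
        float lat = latitudeDeg * Mathf.Deg2Rad;
        float d = decl * Mathf.Deg2Rad;
        float h = hourAngle * Mathf.Deg2Rad;
        float elevation = Mathf.Asin(
            Mathf.Sin(lat) * Mathf.Sin(d) +
            Mathf.Cos(lat) * Mathf.Cos(d) * Mathf.Cos(h)) * Mathf.Rad2Deg;

        // 0 deg = sun at the horizon, 90 deg = directly overhead
        transform.rotation = Quaternion.Euler(elevation, 0f, 0f);
    }
}
```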
The application is currently running on my iPhone 4, but I have no video of it, so here’s a screenshot from Unity where one of the newer versions of Christiansborg is displayed.
We do have some screenshots from the application, though:
This is from an early build with no textures on the buildings or anything, but the gyro works, i.e. you can look around the environment by turning and moving the iPhone.
This is from the newest build. The model is the Christiansborg castle that stood there before the current one. The image is dark because the sun’s position is calculated from the real sun, which has gone down at this point. The arrows allow you to switch between castles.
I just started my 5th semester of Medialogy, which focuses a lot on 3D modeling in Maya, and I’d like to show you my first 3D models! Our assignment was to make a town square with a statue or a fountain, and I tried to make a forest village, kind of like the one the Ewoks live in. So, without further ado:
This is a write-up of this video, mostly so I don’t have to watch the video every time I want to mesh render something.
- Go to “Window” -> “Rendering Editors” -> “Hypershade”.
- Create a new Lambert shader, and open its attributes.
- Set “Color” to black.
- Set “Diffuse” to 0.
- Select all objects, then right-click-hold on the shader and choose “Assign Material To Selection”.
- Click the “Output” icon in the shader attributes (the arrow in a box next to “Presets”).
- Open the “mental ray” section, then “Contours”.
- Tick on “Enable Contour Rendering”.
- Set “Color” to white.
- Set “Width” to 0.25 or 0.5.
- Go to “Window” -> “Rendering Editors” -> “Render Settings”.
- Set “Render Using” to “mental ray”.
- Go to “Features” tab, and open “Rendering Features”.
- Set “Primary Render” to “Raytracing” under “Rendering Features”.
- Set “Secondary Effects” to “Raytracing” under “Rendering Features”.
- Open “Contours” in the same tab.
- Tick on “Enable Contour Rendering”.
- Set “Over-Sample” to 3.
- Set “Filter Type” to “Gaussian Filter”.
- Open “Draw By Property Difference” in the same tab.
- Tick on “Around all poly faces”.
- Render your image, and you should see something like this: