Individual reflections - Kristóf Lénárd

I feel that I have gained a lot from this course. I have learned about different types of AR, both marker-based and markerless, and about VR. I have learned when to use these technologies, how to develop apps for them, what makes such apps different from “regular” ones, and why this is such a quickly developing and growing field.

During development, I contributed code to the marker-based AR app; contributed AR code and model creation to the markerless AR app; contributed to the VR app by developing one of the levels and doing sound design for the escape room game; and contributed to the XR app by helping the others, both with code and with general Unity development, along with adding a few models, some code, and sound design. Sadly, for the XR project, the main feature I attempted to develop – multiplayer experiences in VR – failed, as the networking package I used was not compatible with the Oculus XR plugin.

In the following sections, I detail these contributions further and reflect on how the technical knowledge I gained from each project can be useful to me, especially in the field of game development.

 

Firstly, I would like to talk about marker-based AR and what I learned about it. It is a technique wherein AR experiences are tied to markers, such as unique images, and triggered when some input of the AR device corresponds to the marker – for example, the camera capturing the image. Of course, this makes the AR experience not very flexible: if the marker is large or immobile, the experience can only be had in a set location. Even this can be useful, however – if the AR experience is centred on a set location, it may well be more stable and easier to plan than a markerless one. Our project, whose concept was displaying animals moving around based on images in a children’s book, used this by fixing the book as a reference point: the book is easy to move, and its images offer well-differentiable markers. Sadly, we encountered many difficulties during development, especially with the Vuforia package we used, which did not recognize our images very well and which we even had trouble simply adding to Git. All in all, this can be a useful technique, but its limitations have to be considered.
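To give an idea of the control flow behind this, here is a minimal, hedged sketch in Python (for illustration only – the actual project used Unity with Vuforia, where an image target raises tracking-status callbacks; all names here are my own invention, not Vuforia’s API):

```python
class MarkerExperience:
    """Ties one piece of AR content to one image marker (hypothetical names)."""

    def __init__(self, marker_id, content):
        self.marker_id = marker_id
        self.content = content   # e.g. an animated animal model from the book
        self.visible = False

    def on_tracking_changed(self, found):
        # Show the content only while the camera actually sees the marker image;
        # hide it again as soon as tracking is lost (e.g. the page is turned).
        self.visible = found


# One experience per book image:
fox = MarkerExperience("book_page_3_fox", "fox_model")
fox.on_tracking_changed(True)   # camera found the fox image
assert fox.visible
fox.on_tracking_changed(False)  # page turned, tracking lost
assert not fox.visible
```

The key point the sketch shows is that the content’s visibility is entirely driven by tracking events, which is why a lost or poorly recognized marker immediately breaks the experience.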

 

Markerless AR is also an AR technique, but instead of set markers such as images, it uses what I would call “contextual markers” (which makes the name a bit of a misnomer): location, environmental features, motion – there are truly many kinds. From these it finds feature points and calculates the placement of virtual objects. Our project here was an app that displayed doors hiding famous locations, useful in geography or history lessons, as the project required an educational focus. I developed the models and the way Unity displayed them, along with contributing other code. This approach is much more flexible than marker-based AR, and our solution can be expanded by simply adding more prefabs, since the door code does not change – only the inner model does. With further development, this makes it suitable for educational use, as anything can be displayed.
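The “just add more prefabs” idea can be sketched roughly like this (Python for illustration; in the actual Unity project these would be prefabs sharing one door script, and all names here are hypothetical):

```python
class Door:
    """Shared door behaviour; only the model behind the door varies."""

    def __init__(self, location_name, inner_model):
        self.location_name = location_name
        self.inner_model = inner_model  # stand-in for the 3D model asset
        self.is_open = False

    def interact(self):
        # Toggling the door reveals or hides the location behind it.
        self.is_open = not self.is_open
        return self.inner_model if self.is_open else None


# Expanding the app means only registering new entries here -
# the Door code itself never changes:
LOCATIONS = {
    "Eiffel Tower": Door("Eiffel Tower", "eiffel_tower_model"),
    "Colosseum": Door("Colosseum", "colosseum_model"),
}

assert LOCATIONS["Colosseum"].interact() == "colosseum_model"
```

This separation of fixed behaviour from swappable content is what makes adding a new location a pure content task rather than a coding task.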

 

The third project was a VR project using the Oculus Quest headset. To me, this was the most exciting technology we learned – projecting a whole virtual world, instead of projecting virtual objects into reality. We developed an escape room game in which the player has to solve VR puzzles, aided by a person in real life, to escape. My puzzle was the sound-based one: the player has to find the source of a sound, interact with it, and then interact with something else in the room. This offered us an exploration of sound in VR, which has different requirements from regular game sound. One example is spatialization: since VR is always a surround environment, it requires precise sound directions for the best experience. The spatializer plugin uses a head-related transfer function (a mathematical function describing how a sound reaches the ears, depending on the relative position and orientation of the head and the sound source) to turn an otherwise monophonic sound into a directional one. This is supported by a distance-modelling stage, which uses cues such as loudness and reverb to make the sound’s perceived distance more realistic. Applying both therefore produces a realistic sound that conveys both direction and distance – which the puzzle built on.
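As a rough illustration of the two cues described above, here is a hedged Python sketch: inverse-distance attenuation for the distance cue, and constant-power stereo panning as a crude stand-in for the level differences an HRTF would produce (a real spatializer does far more, such as interaural time differences and spectral filtering; the function names are mine):

```python
import math


def distance_gain(distance, ref_distance=1.0, min_distance=0.1):
    """Inverse-distance attenuation: amplitude falls off as 1/distance."""
    return ref_distance / max(distance, min_distance)


def pan_gains(azimuth_rad):
    """Constant-power stereo panning for azimuth in [-pi/2, pi/2].

    0 is straight ahead, -pi/2 fully left, +pi/2 fully right.
    """
    angle = (azimuth_rad + math.pi / 2) / 2  # map to [0, pi/2]
    return math.cos(angle), math.sin(angle)  # (left gain, right gain)


# A source 2 m away, 45 degrees to the player's right:
gain = distance_gain(2.0)              # half the reference amplitude
left, right = pan_gains(math.pi / 4)
assert gain == 0.5
assert right > left                    # louder in the right ear: direction cue
```

Even this toy model shows why the puzzle works: the player gets an independent distance estimate (overall loudness) and direction estimate (interaural level difference) from a single sound source.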

 

For the fourth project, we developed a VR arcade with minigolf and a racing game. Sadly, my part, the networking between multiple VR players, was unsuccessful (as mentioned before), but I was able to contribute by helping to fix issues such as hands not being able to grab some objects, adding more realistic audio, and offering solutions to problems in others’ code. As this was also a VR project, the underlying fundamentals were the same, but it gave us a chance to explore more of the many features of the Oculus development kit, such as hand tracking and teleportation.

 

Overall, I feel I’ve gained a lot from this course, and since both AR and VR are emerging technologies, I daresay I will be able to use them well in the future – especially if, as I hope, I can get into the game development field.
