One month into my summer vacation, and I have been spending my time doing research on ecological modelling, wrapping up work at Code Gakko, and working on a side project scraping and representing MOE data. With 2 more months left, I have decided to start a series on my capstone topic, so that I can document the research I'm doing and make it more robust. From time to time, I will also put up my thoughts on Code Gakko and my side project as and when I make progress.
The most important question out there, that Maurice asked me:
What do you want to learn?
After all, the capstone is a 10MC subject.
Choose one approach, master it.
After much thought, I want to learn 2 things:
How agent-based models work, what their limitations are, and whether there is a scientific way of going about working with ABMs
How to create a webapp for this model, for educational and research purposes
So far, this is what I have: just a little plan that outlines the knowledge I require to do my capstone. One piece of advice I received from my advisor Professor Gastner is that many students don't do their literature review. What is the literature review anyway? It is the process of looking at all the related work that has been done out there and understanding how your own work is situated in the greater body of academic research.
In my context, I am doing an ecological model that simulates tropical forests for chronosequences of more than 100 years. I am investigating how the biodiversity of a landscape can be affected by different parameters, such as dispersal limitation, functional traits of species etc.
Thus, the relevant questions and content I need to look out for are as follows:
Why is biodiversity important?
What are current measures of biodiversity?
What are the theoretical factors that affect biodiversity?
Why is there a need for a model in the first place?
What assumptions am I making in my model?
How do I choose the parameters for my model?
Is there a mathematical basis for the model?
I have sought out a few resources to kickstart this research process and they are as follows:
Concepts of Biodiversity / Community Ecology:
Colwell, R. K. (2009). Biodiversity: concepts, patterns, and measurement. The Princeton Guide to Ecology, 257–263.
Semeniuk, C. A., Musiani, M., & Marceau, D. J. (2011). Integrating spatial behavioral ecology in agent-based models for species conservation. Edited by Adriano Sofo, 1.
Grant, W. E., & Swannack, T. M. (2008). Ecological modeling: a common-sense approach to theory and practice. Malden, MA ; Oxford: Blackwell Pub.
Gimblett, H. R. (Ed.). (2002). Integrating geographic information systems and agent-based modeling techniques for simulating social and ecological processes. Oxford ; New York: Oxford University Press.
Botkin, D. B. (1993). Forest dynamics: an ecological model. Oxford ; New York: Oxford University Press.
Jørgensen, S. E., & Bendoricchio, G. (2001). Fundamentals of ecological modelling (3rd ed). Amsterdam ; New York: Elsevier.
This year, I'm doing research with my professor on model checking and verification. More specifically, analyzing an open problem posed by Thomas Schelling: to come up with a more formal analysis and precise formulation of certain criteria that the Schelling model tries to fulfill. In simple terms, the Schelling model is about racial segregation, and whether people of different types will mix evenly if they have even a slight preference for their own race.
But first, we need to think broader. In the broader perspective, having a more precise formulation is really important. Many systems are probabilistic, and being able to describe them precisely helps the stakeholders involved make more informed decisions. For example, software glitches in the Toyota Prius caused 185,000 cars to be recalled, when good model checking could have been used to make sure the system doesn't mess up.
The best thing I've read was the comparison between model checking and testing. Testing can show the presence of errors, but never their absence: when we test, there are errors we may not catch, because testing requires prior knowledge of what could go wrong. Model checking instead considers all the possible states the system could end up in and formally verifies that it fulfills its specification to the degree we want it to. This is really hard to do by exhaustive brute force, so we rely on mathematics to rigorously verify the models we use.
Now what does model checking entail? First, coming up with a way to model a certain problem. There are many techniques for this, but the one I'm trying out is transition systems: modelling how a system progresses from one point to another, considering all the possible states it could move into. Next is coming up with certain properties that I want the model to fulfill. There are several kinds of properties. One basic property is reachability: what is the probability of an algorithm terminating successfully, or of an error occurring during execution? Another is long-run behavior: does the system oscillate between two states in the long run, or do changes to the system settle towards a limit?
And there are a few ways to go about computing these. For reachability properties in a finite system, you can compute the probability as a sum over paths. If the state space is infinite, you can't enumerate (the computer would just go on forever); instead, you derive a system of linear equations that describes the model and solve for all the states simultaneously. There's also the method of expressing reachability probabilities as a least fixed point, solved using the power method. For long-run behavior, solving the linear system tells you whether a limit exists, whether it depends on the initial distribution or state, and other properties of the system in the long run. I can see its usefulness in understanding a model more precisely.
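To make the linear-equation approach concrete, here's a minimal sketch (my own toy example, not from any PRISM material): a four-state random walk where states 0 and 3 are absorbing, and we solve for the probability of eventually reaching state 3 from each state.

```python
import numpy as np

# A tiny discrete-time Markov chain over 4 states.
# P[i][j] = probability of moving from state i to state j.
P = np.array([
    [1.0, 0.0, 0.0, 0.0],   # state 0: absorbing ("failure")
    [0.5, 0.0, 0.5, 0.0],   # state 1
    [0.0, 0.5, 0.0, 0.5],   # state 2
    [0.0, 0.0, 0.0, 1.0],   # state 3: absorbing ("success")
])

target = 3
transient = [1, 2]  # states where the outcome is still undecided

# For transient states, reachability probabilities x satisfy
# x = A x + b, i.e. (I - A) x = b, where A restricts P to the
# transient states and b holds the one-step probabilities of
# jumping straight into the target.
A = P[np.ix_(transient, transient)]
b = P[np.ix_(transient, [target])].flatten()
x = np.linalg.solve(np.eye(len(transient)) - A, b)

reach = {0: 0.0, 3: 1.0, 1: x[0], 2: x[1]}
```

Solving the 2-by-2 system by hand gives 1/3 from state 1 and 2/3 from state 2, which matches what the code computes; the power method would converge to the same fixed point by iterating x ← A x + b.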
To do all of this, there is a piece of software I'm learning to work with, called PRISM. It's a model checker in which you write code to describe the states of the system as well as the behavior of its evolution. But before I even get into the workings of PRISM, my job this coming week is to get into the habit of modelling phenomena and formulating them precisely, and to come up with a few properties I want to check about the models. As I continue my research, I'll keep track of my progress here, so stay tuned!
Instead of talking about technical things for my last post on Metalworks, I’m going to take this time to pen down the things I’m really grateful for.
I came in knowing almost nothing about software development. I had knowledge of only algorithms, data structures, and basic computer science, but nothing about how to create good, functioning software that other people can use. 3 months later, with the patient guidance of my fellow staff members, I learnt more than I could have asked for. Whenever I had a question, nobody grumbled or had anything nasty to say, and they spent time making sure I understood everything.
That's what I really appreciated at Metalworks. No office politics. No backstabbing, no gossip. Just really fun conversations about any topic, from literature to cool technology to the latest happenings around town. We have this channel in Slack called #random where the staff post really interesting things. Lunches are really nice too: we all take our lunches away and sit at a wooden table near the pantry, and it's a relaxed affair. Very rarely do we continue talking about work, and it is a great time to recharge.
A huge shoutout to the full-time software developers Rollen and Jayden, who were such great mentors. They know a lot about software and hardware, and are really smart and perceptive, but have no airs about them. They always explained their code really eloquently and let me ask as many questions as I could about it. I owe them a lot, and they really made my experience at Metalworks.
Then there are the fellow interns, who do a fantastic job and inspire me to work smarter and harder, and the PR manager Daylon, whose unwavering focus and zen-like attitude inspires me to get my act together.
And finally, the heads Tom, Mark and Nico, who are really knowledgeable and hands-on about the tech they manage. After asking around considerably, I understand the amount of work they have to do. I see them stay back on weekends to work on the huge volume of projects that comes in, and they keep our jobs fulfilling by giving us really interesting tasks to handle.
Because of the number of clients we handle, we are almost always guaranteed an interesting spread of technology to use. From UV cameras to VR smell components, I've certainly had to put a lot of quick thinking to the test. Rapid prototyping is really hard, because it requires a lot of research. Research usually means figuring out whether a certain technology can be used, and that means experimenting with the APIs (if any), actually testing them out, and then writing a little report to explain to people how things will work. It's not easy, as you can run into a roadblock very quickly.
So, that’s it from me! Next week, I will start work on my computer science research proper and I will be coming up with a proper framework soon 🙂 Stay tuned!
My penultimate week at Metalworks has ended and I have many thoughts about my internship which I will share next week. Meanwhile, I’m leaving for Japan in 2 weeks and have just started research with my computer science professor on model analysis and verification. I’ll speak more about that in another post, but this week at Metalworks was about touching up on my work on image processing and running some debugs.
Monday – Thursday: Running debugs / touching up on code
This was the structure of my code:
for x in range(len(p)):
    for y in range(len(p[x])):
        # <insert task on pixel p[x][y]>
Having two nested for loops is disastrous for run-times on a Raspberry Pi. Even on a Pi 2, it took a good 15 seconds to process, which was not good enough for the project I was working on. So we threw that out the window and eventually settled for a much, much simpler option: using an image editor already available on the Pi. GraphicsMagick did the trick in less than 2 seconds on the Pi 2, which really makes me wonder how it does it so quickly. The command was really just this: gm convert -modulate <brightness,saturation> <input filename> <output filename>. Jay and I figured it must be multithreading, using multiple cores of the CPU to do the job, or perhaps C++ with pointers looping over a raw bitmap instead of a Python array. I have no idea as of now. I tried looping through the image bitwise, but that proved far too slow compared to GraphicsMagick. Incredibly, it reduced the code to just one line: the power of libraries.
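For reference, shelling out to GraphicsMagick from Python looks something like this. It's only a sketch: the filenames are placeholders, and the default percentages are ones I've picked for illustration (for -modulate, 100 means "unchanged").

```python
import subprocess

def modulate_cmd(src, dst, brightness=110, saturation=120):
    # Build the GraphicsMagick command list; -modulate takes
    # percentages, where 100 leaves a channel unchanged.
    return ["gm", "convert", "-modulate",
            f"{brightness},{saturation}", src, dst]

def adjust_image(src, dst, **kwargs):
    # Run it, raising an exception if gm exits with an error.
    subprocess.run(modulate_cmd(src, dst, **kwargs), check=True)
```

Keeping the command construction separate from the call makes it easy to check the arguments without actually invoking gm.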
But that also left me with less flexibility to do what I really wanted with the image, which was to enhance contrast based on certain criteria. While we are in the prototyping phase, though, sometimes we can settle for a little less.
One of the features we tried to implement: while one function was being run by the CPU, could we run another process simultaneously and break off the foreground process when necessary?
That brought us to multithreading, which was a whole new world for me altogether. In my algorithms and data structures class we briefly touched on parallel programming, but I had no idea how to approach the topic. Well, I'll leave that for another time, but if you want to know a little more about it, a quick Google search brought me here (Quora), here (Stack Overflow) and here (of course, Wikipedia). I still don't understand it fully, but I will eventually get there once I've learnt enough in class.
Eventually, we used another method to create the same effect we wanted, which was just about killing processes (less elegant, but does the job – sounds like a prototype??)
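The process-killing approach can be sketched in a few lines of Python; this is my own minimal reconstruction, where `sleep 30` stands in for whatever foreground task needs interrupting.

```python
import subprocess
import time

# Launch a long-running job without blocking the main program.
proc = subprocess.Popen(["sleep", "30"])

time.sleep(0.1)  # ...the main program carries on with its own work...

if proc.poll() is None:  # still running?
    proc.terminate()     # send SIGTERM to break it off
proc.wait(timeout=5)     # reap the process so it doesn't linger
```

Not elegant, but for a prototype it reliably frees the CPU for the next task.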
Raspberry Pi / Command Line
I admit it: I do not have enough experience with the command line, and I often look very amateurish when it comes to doing things efficiently. It really is the little things, like "cd ~/<folder in the home directory>" or "sftp pi@<insert IP address>", that make the difference between a seasoned user and someone who just flops around on Google searching for command-line shortcuts.
And I guess that concludes the reflections for this week! There really wasn't much else; the rest was touching up the projects, doing rounds of testing and debugging, and pushing code. Watch out for my post early next week on the beginnings of my research.
I'm back! I spent my 2-week hiatus in New York and Boston to attend my brother's wedding ceremony and, I guess, take a little break from work. The moment I got back, it was straight to work. I handled quite a difficult task (for me): image processing, which I had never done before. The only little thing I knew was that images are made of pixels and that pixels can be manipulated. So this post will be all about image processing, with a little introduction to computer vision.
There is the Wikipedia definition, but here's how I see it. Computer vision is a field concerned with how computers can view things the way humans do, and how they can manipulate the things we see. The most basic example is facial recognition. A computer looks through a set of images and runs through the pixels. There are certain criteria for what counts as a face, and one simple criterion is color. By examining the RGB (red, green, blue) values of an image, the computer can determine, whether by a probabilistic model or straight-up color matching, if there is a face there.
A little more in depth
Usually, an image is represented as a 2D array: an outer array for the rows (the y-axis), each containing a nested inner array for the columns (the x-axis). Each value in the inner array contains the RGB values, represented as a tuple, e.g. (255, 255, 255), usually with 8 bits per channel. If the image is converted to grayscale, each pixel is just a single value representing how light or dark it is.
Finding an efficient way to access these pixels is important, because looping through the arrays means lots of iterations. It might be fine for an 800 x 800 image, but a 5MB image can easily be 5634 x 3687 pixels, which is over 20 million of them. And that's just looping through the image once, let alone making modifications to it. To achieve a reasonable runtime, the algorithms have to be efficient: for that 5634 x 3687 image, adding one extra step inside the inner loop means performing over 20 million additional operations. That's by no means trivial.
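As a concrete illustration of the representation, here's a tiny example using NumPy rather than plain nested lists, since vectorised operations avoid the slow Python-level loops entirely:

```python
import numpy as np

# A tiny 2-by-3 RGB image: shape (height, width, 3), 8-bit channels.
img = np.array([
    [[255, 0, 0], [0, 255, 0], [0, 0, 255]],
    [[255, 255, 255], [0, 0, 0], [128, 128, 128]],
], dtype=np.uint8)

# Pixel access goes row (y) first, then column (x).
r, g, b = img[0, 2]  # the blue pixel in the top-right corner

# Grayscale: collapse the three channels into one luminance value
# per pixel, here with the common ITU-R BT.601 weights. This one
# line touches every pixel without a Python-level loop.
gray = (0.299 * img[..., 0] + 0.587 * img[..., 1]
        + 0.114 * img[..., 2]).astype(np.uint8)
```

The same weighted sum written as two nested Python loops is exactly the slow pattern described above.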
And then there’s manipulation
There are a few basic ways to make changes to images, and they work best when the images are binary; that makes detection so much easier. But what if things are in gradients? My project this week required me to detect and change colors over a spectrum, not just one pixel value. I searched for algorithms that could help me achieve this, but they all dealt with binary images. For example, to detect black dots on a colored background, you convert the image to the HSV color space and then isolate the black from everything else. Other approaches use adaptive thresholding, where a threshold is supplied and the surrounding pixels adapt to it. The problem is that these are all binary: the algorithm sets each pixel to either the target color or 0 (black). That makes for really unnatural effects, which was not in the scope of my project.
So I had to write my own algorithm. It's by no means an easy task, and there's certainly a lot more sophistication that could go into it, but in layman's terms, here's what it tries to do:
1. Isolate the portion of the image I want to detect.
2. Crop it out for the computer to work on.
3. Use percentiles to check the relative brightness of the pixels.
4. Examine the color of each pixel in relation to the percentiles I determined.
5. Make the necessary changes to each pixel that satisfies the criteria I set.
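The five steps above can be sketched roughly like this. To be clear, this is my own reconstruction with made-up parameter names, not the actual project code: grayscale image in, a crop region and a percentile cutoff as inputs, and every pixel darker than the cutoff gets lifted.

```python
import numpy as np

def brighten_dark_region(gray, crop, pct=25, boost=40):
    """Crop out a region, find a brightness percentile, and lift
    every pixel darker than that cutoff (a simple stand-in for
    'make the necessary changes')."""
    y0, y1, x0, x1 = crop
    region = gray[y0:y1, x0:x1].astype(np.int16)  # room to add safely

    cutoff = np.percentile(region, pct)  # relative brightness cutoff
    mask = region < cutoff               # pixels satisfying the criteria
    region[mask] = np.clip(region[mask] + boost, 0, 255)

    out = gray.copy()
    out[y0:y1, x0:x1] = region.astype(np.uint8)
    return out
```

Using a boolean mask replaces the explicit double loop over pixels, which matters a lot on something like a Pi.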
There are cleverer ways I could implement this, like using block sizes to filter out parts of the image I don't want detected, or using recursion so that previously detected pixel values feed into the criteria for later changes. But I had a time limit, and currently my Python skills aren't advanced enough to do it. I have been trained so much in OCaml that thinking in object-oriented terms is still more difficult for me.
In computer science class, I learnt that anything that runs at O(n^2) is really bad; insertion sort is O(n^2), and you could very well do better with other sorting algorithms. The algorithm I tried made two passes of nested loops over the image, each O(n^2) in the image's side length: one to gather the relative brightness, and another to make the necessary changes. The second pass depended on the results of the first, so I had little choice but to run them one after the other.
Another thing: most processes run fast on a computer. They do the job pretty well. But when they're transferred onto a microcomputer, things can go pretty awry. For our project, we had to run everything on a Raspberry Pi 2, and before that we had to install a few libraries, mainly NumPy. NumPy took over an hour to compile on the Pi, and we rather regretted not setting that up beforehand. Also, because on a computer you can usually get past "Permission denied" errors just by using sudo once, on the Pi you have to make sure you apply it consistently.
My algorithm ran in 5-7 seconds on a computer, but 20 seconds on the Pi. For our project, that was quite disastrous. I'm now trying out other algorithms, but they're not doing any better at all. I'll update later this week.
When it comes to real-life work, runtime matters a great deal.
A little musing on OOP (Object Oriented Programming)
1. No error catching at compile time.
This means that when I run a function, the code starts executing without any upfront type checks. Since I don't have the habit of printing debug statements throughout my code, I am often unsure where the source of an error is. The system may throw an error saying there was a wrong type somewhere, but that means reviewing lines of code all over again to identify what's actually causing it. It takes a lot of time, and getting into the habit of testing your code in small snippets really helps. Also, when a code path never runs, its errors never surface, so sometimes I'm left wondering what went wrong.
That being said, I have enough experience now to know how to set up my own code to find bugs and test properly.
2. Passing functions around feels less natural
Strictly speaking, functions can be passed around, but composing them and threading them through other functions feels far clunkier than in a functional language. It can get quite tedious to write code after a while, and I really appreciate functional programming languages for making it so effortless.
This week was a good introduction to computer vision, even though I probably only scratched the surface. More importantly, I'm learning how to write good code on my own, and how to find errors in it in a new context.
Notice: I'll be taking a hiatus from this series from the 20th of July till the 10th of August, as I'm going to Boston for 2 weeks! I will still be updating this blog, but in a different capacity: to document my explorations outside of my internship.
This week was really all about the one project I was tasked with. As it makes sense for me not to reveal the inner workings of the project, I'll talk about some things I explored during the week, and something I did for MakeDay!
Monday: Project / Make Day
Tuesday: Make Day / Project
Friday: Hari Raya!
At Metalworks, we get one day a week to do anything we want, preferably non-work-related and creative. We call it MakeDay. The possibilities go anywhere, really, from remote 3D printers to wireless chargers. Some choose to learn a new framework like Ruby on Rails, or learn to work with an Arduino. A previous intern made Arduino Pong, which was pretty cool.
I decided to do something a little different and try out computer graphics manipulated real time. My initial inspiration was this video made by Adrien M and Claire B, where they created this IDE called eMotion. Their main focus was on particle manipulation, and you could come up with really cool interactive graphics from it. I also realised that they had Leap Motion integration so I took liberties to have fun with it. This was the result:
This was just a basic example exploring the different motion brushes to see what kinds of effects you can pull off. An important thing I noted was how different effects have different dependencies: they don't work with all types of particles, and the parameters required to activate them vary a lot. Getting the right strength for an effect is one thing; using it for a good artistic purpose is another. I'd like to explore that a little further. I also don't think I've explored the full potential of the Leap integration yet, though I suspect the Leap itself is quite limited. My goal would be to mimic the above video, but with different graphics.
I've always wondered if you could script effects for specific hand motions, for example making the particles react to the speed of a rightward swipe. There is a way to do that with the scripting feature in eMotion. I haven't had a chance to try it yet, but hopefully I will when I'm back from Boston 🙂
I've been working on a retail project for the past few days, and it has entailed tons of iterations. If anything, I've begun to question things a bit more than I used to. I kept quiet most of the time because I wanted to observe how things ran, and now that I've done so, it's time to start asking questions. I went to clarify the position of the company, what our main value offering is, and what kinds of opportunities we are looking out for.
After 8 weeks here, I think I've got a good idea. Credit to Rollen for this one: we do 3 main things, Prototyping, Production and Pitching. Prototyping is providing a proof of concept to our clients. Production is making a product worthy of going public. Pitching is the initial phase of putting the idea out there for the client. This week was about prototyping. But how do we add value with prototyping? We find new ways to use technology and show how it can be done. We do the necessary research to check feasibility and hack away at a very rough design in 1-2 weeks. If the prototype we are building already happens to be implemented elsewhere, then we push it further by adding a few features that our clients have not thought about.
While building the prototype, modularity is really important to troubleshooting. It helps you identify the problem, isolate the parts that aren't working, and fix them without affecting the other parts. It is a basic principle in software development, but equally applicable in electronics. It saved me countless hours that would otherwise have gone into resoldering, or testing connections in places I shouldn't have been checking. And the idea of plug and play makes swapping out broken components, or even just assembling the prototype, much easier. It's always stressed in my computer science classes at Yale-NUS, but it's only when you encounter it in real life that you really appreciate the countless reminders the prof drills in.
But yeah, that was this week, and Boston here I come! Going to play around with this Haskell eBook that Rollen passed to me, and it’s time to do a review in functional programming. More thoughts coming soon 🙂
Work has gotten a lot busier this week. I’ve been tasked to handle a project individually, so I’ve been spending a lot of time on it. Reflecting on 7 weeks at Metalworks, I felt that I’ve become a lot better with handling & reusing old code, coming up with ideas for projects, and researching on how to turn it to reality. This week and this coming week will be the test of whether I can push a project through. It isn’t a big project but I’m counting on small steps to learn and improve.
Monday: Electronics – soldering, proto-boards
Wednesday: Make circuitry more robust
Thursday: 3D Printing first iterations, Processing (to play videos)
Friday: Processing code editing, 3D Printing iterations
Tearing Down and Starting Again
Something I've got to get used to: redoing things even when they're going well. I have a tendency to get attached to what I've created, building up my own inertia against budging from my current path. But there's a limit to inertia. I built the circuit above and wired it pretty well to function how I wanted, but it didn't fit the specifications of the design I was required to build, so I had to tear it down. Before I did, though, I ran small iterations with each part of the circuit to make sure I knew what I was doing, and I was quite happy with the results.
In one of the iterations, my supervisor thought of using USB ports to join the pieces together, and it was a great idea. I had originally used a different configuration to achieve the same effect, but thinking about it, the USB port would really do it. It took me a while to realize that the name USB actually means something: Universal Serial Bus. Serial, because it transmits serial data; bus, because it connects electrical components. In the mainstream we toss the word USB around freely, but it does so much more than shift folders from one device to the next. With a little hack, I managed to create these kinds of connections and get them to work. Being able to throw away old ideas and use new ones isn't revolutionary, but it is important.
It's important to iterate, but even more so to iterate properly. This is what I tried at first: an ambitious attempt to go all out at once, which was obviously not going to work. Almost nothing fit. I then worked in smaller iterations and got the pieces to fit one by one, along the way discovering certain principles of 3D design, like how much clearance to give holes, and how to make each iteration less wasteful, like really cutting down on the amount of PLA I use. These were all really important, and I managed to get everything to fit in the end. Yay! \^o^/
Reusing old code
This project had been done before, and it was important for Metalworks to do it again to build a good repository. Unfortunately, the code wasn't very well documented, and the old hardware had been torn apart, which is why I've been working on it. So I had to reuse old code. It was hard at first: Processing is based on Java, and I hardly have any good experience with Java, so I was a little hesitant. But my supervisor ran me through the code once, and that was really important. After hearing the explanation, I quickly dove into the code and broke the file into commented sections so that I, and hopefully others, could understand it easily. I wrote out explanations for ambiguous parts of the code that I couldn't grasp immediately, including some "under the hood" events. All this seemed important so that the code I've edited and written will be more reusable for the next person.
I found Processing interesting, and in about a day I managed to write a class, make split windows, display text, play a video, and read serial inputs. I’m happy with my progress, and it allowed me to focus on the 3D prints for next week.
The act of creating really excites me. I get really motivated whenever I get the chance to create something I can call my own. I dove in, got sucked in, and now there’s no turning back. I’m going to keep exploring, keep making, and make the most of my time here. As I was thinking, it is rare to have the chance to deal with hardware, and rare to have great support from my team mates, and I will be pushing on from here. yaaaa!
I'm already more than halfway into my internship, and I'm kind of surprised. Time really zooms by, and I've more than fallen into the rhythm of waking up, going to work, meeting friends, having meals with my mom, coding, reading good articles, and keeping up with my Japanese. In 3 weeks' time I'll be taking a hiatus from work to be in Boston, and in 7 weeks I will be in Japan. Life is whirring away.
This week was about wrapping up the iOS app I was working on. The client I was building it for was still in the midst of negotiations, so I put it on pause. But of course I didn't want to leave it just like that, so I spent a few days refining the app's basic functionality and cleaning up bugs; I think I understand much more about software testing now. Then there was an old project that Metalworks had done before and needed reviving, so I took up the reins for the electronics. I revisited my good old friends the Arduino and Sketch-up Make and spent a good 2 days working on it. In a nutshell, here's how my week went!
Monday: iOS app – Direct Messaging
Tuesday: A/B Testing / refining iOS app
Wednesday: iOS app debugging / cleaning up & An old project revived
Thursday: Arduino, Sketch-up Make: Rounded Corners
Friday: Multiple sensors, Make-Day brainstorm
I'm still new to this software development thing, so I spent some time familiarizing myself with fixing bugs. I chanced upon plenty of good articles, which I will link here (on bug reporting & triage), here (contributing to open source projects) and here (what makes a good open source contribution). All of these give me some perspective on how software is made good through small incremental improvements by many different people. At Metalworks, I'm pretty much the only one familiar with Swift, but the method of focused, concise bug reporting came in really handy. Maybe I'm a lot more familiar with Swift now, because the bugs weren't too difficult to work around.
Responsible committing is really important too. Jay was telling me about a previous intern who was very particular about the way he made commits to Git: every small pocket of change was well described, so that it is readable and understandable, and dealt with only one issue. That is something I'm looking to emulate, and I'm starting to apply it in my Arduino code.
Speaking of Arduinos…
I played around with Sketch-up on Thursday, thinking about the design of the prototype casing, and I found this library called Rounded Corners. It makes your designs look much more professional. The only thing is that you can't undo your rounded corners after you've closed the file, so apply it only to the last iteration of your product, or keep backup copies. A small experiment I tried out:
Make-day // Processing
I haven't taken the time to explain an important thing the company does during downtime. Once a week we have MakeDay, where for the whole day (if time permits) we work on something creative, or, as the name implies, we make something. I've been drawn into the computer graphics software Processing, and I've seen some really mind-blowing projects done with it. This video of a Pepper's Ghost illusion with Leap Motion, eMotion and Processing is great, and it got me searching up other things people have done. Adrien M did a super job with projection mapping for dancers to interact with. I'd like to build something really artistic and interactive for my MakeDay, so I started on a few Processing tutorials on a Saturday afternoon (yesterday). Using this handbook, I churned through a few of the important basics.
A Small iOS App Preview
I guess lastly, I’d like to document the stage my iOS app is at right now. Users can have a profile, edit it, upload photos from their camera or photo gallery, have a news feed to see their followers’ posts, direct message another user, and log in with Facebook or create their own username. I experimented with Alamofire a little, and did a tutorial on pulling photos from the web, but that’s it, before I started working on the Arduinos. Let’s see how much more I can do from now on.
Hopes for next week
Dive into Make-Day, complete the Arduino project, and keep reading, keep learning. I’m still into The Pragmatic Programmer so I will also post snippets of gems I’m discovering onto this page. Let’s go!
The world of iOS programming is huge and I’ve definitely only seen the tip of the iceberg. I dove in naïvely, taking it step by step and taking good notes, but it’s almost like walking up a really long staircase and not knowing where you are at the moment. At the end of a long week, I still probably don’t know how skilled I really am at it, but I know what I know, and I know what I need to do next. I guess this post will be more about documenting my journey through iOS programming and how talking to other people in the office helped me along.
Monday: Finishing up iOS course on Udemy
Tuesday: Implementing news feed / Facebook integration with Parse (in Swift)
Wednesday: Facebook integration / Changing usernames
Thursday: Facebook integration testing / Implementing direct messaging
While I spent the week coding, I really appreciated how there were always people asking me how I was doing – in a concerned way rather than an intrusive one. By intrusive, I mean that sometimes you get the feeling you’re being asked “how you’re doing” for the sake of checking up rather than out of genuine interest. I feel like people here care. As I became more open about my work, not only could others help me out, but articulating the challenges I was facing also put my work into perspective. Is this problem worth fixing? Should I move onto something else first? Are there other features of a higher priority? Sometimes delaying the search for a solution helps, because I can step away and find a proper answer later. This is my key takeaway for this week:
Opening up about your work is important. By being transparent about your struggles, you gain clarity from the process and become more efficient.
Working with new updates is tough
It feels like I have very little support. I’m using Swift 2.0, and most existing code was written for v1.2, meaning that the difference between succeeding and failing at implementing a certain feature may very well be a syntax difference, because Apple’s language conversion from Swift 1.2 to Swift 2.0 isn’t perfect, and there are plenty of things to debug. The problem is that there aren’t many solutions out there, and the ones that exist haven’t been tried and tested, so I also need to go through the process of experimenting. The good thing is that experimenting gave me some insight into the fundamentals of class inheritance, libraries, closures, type declaration, view controllers etc. I needed to read lots of documentation to gain clarity on what I was actually dealing with, and that was good practice. Also, translating from one language to another can be quite rewarding, especially from Objective-C, which really is quite a horrible language to read.
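To give a flavour of the kind of 1.2-to-2.0 differences I mean, error handling is one of the biggest: calls that used to take an NSError pointer now throw, and have to be wrapped in the new do/try/catch. This is a sketch from memory, not code from the app:

```swift
import Foundation

// Swift 1.2 style (no longer compiles in 2.0):
//   var error: NSError?
//   let contents = String(contentsOfFile: path,
//                         encoding: NSUTF8StringEncoding, error: &error)

// Swift 2.0 style: the same call now throws, so wrap it in do/try/catch.
func readFile(path: String) -> String? {
    do {
        return try String(contentsOfFile: path, encoding: NSUTF8StringEncoding)
    } catch {
        print("Could not read \(path): \(error)")
        return nil
    }
}
```

So a Stack Overflow answer written for 1.2 often fails not because the idea is wrong, but because the call site has to be restructured like this.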
Case study: Facebook integration with Parse.
Login is difficult because Parse and Facebook weren’t aligned. When integrating with Parse, I had to update my Info.plist file. Essentially, Info.plist is the app’s configuration file, which the Parse and Facebook SDKs rely on to pick up the Facebook app ID and name and hand control back to the app after login. I had to add an array called “URL types” with an inner array called “URL Schemes”, but it wasn’t set up to match the Parse SDK, so I could not activate the actual login. Little did I know, upon more Googling, that I could use “CFBundleURLTypes” and “CFBundleURLSchemes” to activate native log-in, which uses a website instead of a pop-up. Refer to this for more info, code is as below:
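For reference, this is roughly what the relevant Info.plist fragment looks like. The app ID 1234567890 and display name MyApp are placeholders for your own values, and the exact keys expected may differ between SDK versions:

```xml
<!-- Hypothetical Info.plist fragment: replace 1234567890 with your Facebook App ID -->
<key>CFBundleURLTypes</key>
<array>
    <dict>
        <key>CFBundleURLSchemes</key>
        <array>
            <!-- The scheme is "fb" followed by the app ID -->
            <string>fb1234567890</string>
        </array>
    </dict>
</array>
<key>FacebookAppID</key>
<string>1234567890</string>
<key>FacebookDisplayName</key>
<string>MyApp</string>
```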
Username problems: I always thought it was possible to pull a user’s username from Facebook and use it for the database, so I went scouring the web for methods to do so. I couldn’t find any. Eventually Stack Overflow redirected me to the changelog on Facebook’s developer page. Apparently they removed the username field from the /me endpoint, which I had been counting on to pull the user’s username and profile picture. Instead, I had to find a way around it. So I created a new view controller that allows a person logging in through Facebook to update his username in Parse, because only the objectId remains constant. Even the method to accomplish that wasn’t clear to me, because I initially thought all I needed to do was query the object from Parse first and then do a simple replacement using “=”. Nope. You need to use the method “setValue”, and then “saveInBackground” right after that. These aren’t immediately obvious, but searching on Stack Overflow really helps. This told me about Facebook’s changelog and this gave me the idea of having a new view controller to change the person’s username.
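A rough sketch of what that username-update flow looks like, assuming the Parse iOS SDK is installed and the user is already logged in (method names are from the Swift 2-era SDK, so treat this as an illustration rather than the app’s actual code):

```swift
import Parse

// Hedged sketch: save a user-chosen username on the current PFUser.
func updateUsername(newUsername: String) {
    guard let user = PFUser.currentUser() else { return }

    // A plain "=" replacement on a freshly queried object didn't stick for me;
    // setValue(_:forKey:) followed by saveInBackground did.
    user.setValue(newUsername, forKey: "username")
    user.saveInBackgroundWithBlock { (success, error) in
        if success {
            print("Username updated")
        } else {
            print("Save failed: \(error)")
        }
    }
}
```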
A general iOS beginner problem
Documentation is huge, and it definitely isn’t easy to take it all in and still pick out what you need. It’s almost like learning a foreign language by reading the dictionary. Usually you go situation by situation and pick things up along the way until you get to where you want to be.
Except that in going situation by situation, I really needed to know how to pace myself and monitor my progress. It can be hard: to understand a concept, you need to understand its prerequisites, and before those prerequisites, there are more prerequisites to understand. It gets tiring, and recording everything down on paper makes the experience a lot better. An example is view controllers. I followed the tutorials online, and along the way I wondered what the difference was between the different types of view controllers; this gave me some perspective. In a nutshell: specific view controllers make your life easier by bringing in the delegates you need, but you sacrifice flexibility.
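That trade-off can be sketched in a few lines (Swift 2-era syntax; the class names are made up for illustration):

```swift
import UIKit

// Option 1: subclass UITableViewController. The table view, data source and
// delegate wiring come for free, but the whole screen must be the table.
class FeedTableViewController: UITableViewController {
    override func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return 10
    }
}

// Option 2: a plain UIViewController that adopts the protocols itself.
// More wiring, but the table can share the screen with other views.
class FeedViewController: UIViewController, UITableViewDataSource {
    @IBOutlet weak var tableView: UITableView!

    func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return 10
    }

    func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
        return tableView.dequeueReusableCellWithIdentifier("Cell", forIndexPath: indexPath)
    }
}
```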
That being said, here are some things I’ve managed to come up with so far:
All functionality to this date, works well. But there will be more tinkering to be done.
I could talk about iOS programming in a lot more detail, but I think it’s also worth documenting my trip to walkabout.sg. walkabout.sg is an open house for start-ups and anyone interested in them. Participants get to speak to different companies and understand what they do. Half the time, I found myself speaking to companies more interested in funding, or companies more focused on working than on welcoming visitors.
Fair enough – I joined walkabout.sg as part of Metalworks’ field day, where we made connections, caught up on what’s current, and perhaps picked up some inspiration. We didn’t quite fall into the category most start-ups were looking for, so there were quite a few uninspired conversations. However, I will talk about a few companies that got me really excited.
Smove is what you can call a car-sharing company. They have a fleet of about 15 hybrid cars and 6 electric cars in around 25 different locations. You sign up with your ez-link card (as a key for the car), credit card (for payment) and driver’s license, and you’re good to go! They have an interesting philosophy of standardizing their fleet for the best user experience, and a really smart way of monitoring their cars. In every car there is a black box that sends data to a control centre, where they can monitor the condition of the car. This makes maintenance efficient, since tracking can be done pretty much in real time. That’s sweet. They plan to expand their fleet to 100 cars by the end of this year, and I can’t wait. Currently, I just wonder what more they could do with the data from their black box – there are many ways they could improve the user experience (geo-location, usage frequency, hotspots etc.)
Upon Jayden’s recommendation, we went to Thoughtworks, located along Amoy Street. They are a consulting agency, but it’s not just about consulting: they receive client briefs and often deliver completed products. From their presentation and Q&A, they seemed a rather intense bunch – people who code for fun after office hours, hold meet-ups in pretty much any discipline, and even run a graduate program to inculcate fresh graduates into their special brand of thinking. I’m curious, and will probably drop by their meet-ups to find out more.
We got a chance to see the government-funded IDA Labs, an incubator for interesting projects. We chanced upon a guy called Grey, co-founder of TinyMos, a camera development company currently dabbling in astrophotography. They’ve made a really neat portable camera that takes beautiful photos of galaxies and planets – one of Andromeda was taken from Mersing in Malaysia. The camera is ridiculously small. They’re planning to present it at TechCrunch, so I shall say no further, but there you have it.
Grey was also hospitable enough to show us around the IDA Labs below, where other groups – start-ups, students working on final-year projects and so on – can use the facilities. Take a look at some of the 3-D printing jobs people have done:
And for cheap thrills, we got to see the Uber office!
Hopes for next week
I really hope to strive on with my iOS development. It has spanned about 1.5 weeks so far, and I’m really enthusiastic about diving deeper. I’m slowly uncovering Swift, and hopefully I’ll be able to document enough to sustain my understanding over a longer period of time.
There are plenty of resources out there. Apple’s documentation is excellent, but as someone new to iPhone development in general, I thought it would be good to learn from someone with the experience to guide you through the little kinks of building an actual app. So I went to Udemy and found this course where you build 14 real-world apps. Apple’s documentation is great as a reference for syntax or for accessing properties of controllers etc., but nothing beats actually building apps from scratch, with a community that can spot what’s outdated in a tutorial or explain why Xcode is crashing.
How did I approach the online courses?
The teacher, Rob Percival, always gave us viewers the opportunity to stop the video and try things out on our own. And so I did. I took time to break down the structure of the applications and experiment with merging different types of functionality together. I often found myself frustrated at not understanding certain concepts behind controllers, delegates, constraints and closures, and I had to do plenty of searching – which required understanding the problem well enough to use the right keywords. Eventually, I also resorted to penning the concepts down:
What it takes is real strength and determination. And also a good environment to learn. Set these up properly and the others will follow.
What have I done so far?
Here are some of the apps I’ve tried:
Well, these aren’t the most mind-blowing apps, but come tomorrow or the day after, I’ll be able to create an Instagram clone – skills I hope to use in our next project.
What else did I do?
I didn’t just do online courses though. I spent time researching the feasibility of some preliminary ideas the company had, and doing some sketches to communicate our ideas to the client. I was also called upon to explain the concept of our earlier project at our ideaShow. If the video goes up, I’ll be posting it here too ^^
Hopes for next week
I really hope to get up to speed with my learning and start applying it quickly. I’m also looking forward to meeting the new interns next week! Alright, more updates to come.