After using hexo for a while, my thoughts are that the concept is pretty great except I’m itching to start a new project. The next iteration of this blog will probably be a node/reactjs/redux project. While I usually try and write everything from scratch, I’m not going to reinvent the wheel too much this time. Writing a markdown to sql conversion script should prove to be an interesting task. Then again, who knows. I might just try and dig into hexo and see if I can make it do cool things.
As I was working on something earlier tonight and ran git pull to fetch updated code for a 3rd-party library, it truly dawned on me how amazing it is that we live in a world so interconnected that I can hit 8 buttons and fetch code written almost anywhere. It’s not just code we’re pulling; this is someone’s hard work. After reflecting on that, I decided to take a drink and toast the person who had produced the code I was using.
Right when StreetFighter V dropped earlier this year, I picked it up along with a Hori Mini 4. Now I’ve done a slight upgrade to a Qanba Q1, which is supposed to be a great beginner fightstick before you feel like throwing down the $200+ for a MadCatz TES+/TE2+.
One problem. StreetFighter V doesn’t recognize the Qanba Q1. At the time of this post, SFV on the PC only recognizes Xinput controllers (xbox360). All you have to do is get an Xinput wrapper application such as Xoutput which lets DirectInput controllers talk to Xinput APIs. Follow the instructions to install the Scp driver and then run Xoutput and configure the Qanba Q1 with these settings.
There is no button 12, so that’s the value I put for the Back command since the controller doesn’t seem to have a back button. You could also just leave that field blank; I couldn’t figure out how to blank it out once I’d put a value in (which is really silly).
I’ve moved the site over to hexo and it is pretty sweet so far. At last generation, it created 213 files in 1.09s. Probably not nearly as fast as hugo, but it’s good enough for me. Still getting used to the whole deployment portion as well. Right now, I’m just manually deploying the rendered files via ftp, but I’d like for hexo to manage all that. We’ll see!
It is now 2016 and the age of static site generators is in full bloom. There is no end to the number of new engines/platforms emerging so I figured I’d take the plunge and try it out. I looked at a number of different options including hugo, ghost, and jekyll. I settled on hexo mainly because the community seemed fairly large and it was written in nodeJS, which has become my framework of choice for most new projects nowadays.
Why static generation? There are many reasons you might want to stick with a dynamic website. Say you want customized layouts for each user; you couldn’t really do that with static sites, because it would defeat the whole purpose. Static site generation takes all the work that happens during a page hit, such as application processing and database queries, and moves it to the moment you actually change the site. Need to write a new post? After finishing the content, you regenerate the static files for the site, and all the processing (and maybe database hits) happens then. When someone clicks a link to the new article you wrote, the server hands them a static file, which is lightning fast. Of course you could still achieve dynamic pages with scripting, but this is really about addressing the simple needs of most users.
I’m going to convert iamchung.com over to hexo for a while to give it a shot, so we’ll see how it actually performs! I’ll write another post once the transition is complete.
After nearly 7 and a half years at my last job, I’ve finally moved on and gotten a position at a new company as a senior software engineer. It took about 2-3 months of serious job searching and even though people will tell you that it’s easy to find a job in the tech sector nowadays, it wasn’t that easy (at least for me). I had a lot of applications out and went to a lot of interviews. I’m going to try and distill a little bit of what I learned.
The tech interview is a complicated animal. You have the initial non-technical phone screen where the HR person basically asks you all the typical questions like “What made you decide to look for a new job?” or “What did you like most/least about your current job?” or the classic “What made you decide to apply with us?”
After that, you go through a tech phone screen where a fellow techie will begin to grill you on some technical questions such as “What tech stacks have you worked with in the past? What did you like/hate about each one?” or the generic “Tell me about a problem you’ve encountered before and how you solved it.” Then they’ll start going into some real tech questions that may or may not involve an online collaborative coding service so they can see how you code. This can include the typical interview-style questions similar to “Write a function that accepts a string as input and determines if the string is a palindrome” or, if you’re lucky, the standard Fizzbuzz. If it’s an algorithm question and the company is an actual tech company, they will probably supplement the problem with questions about the runtime (big O) and then might change the requirements ever so slightly to see how you adapt to change.
If you’re lucky and managed to pass all the phone screens, you get invited for an onsite interview. This is the hardest part, but even if you don’t end up getting the job, making it this far is still rather awesome for most companies in my opinion. Besides, a lot of tech companies tune their interview processes to weed out false positives (bad hires), which means they accept false negatives (rejected good candidates). In other words, they might have rejected you the first time, but that doesn’t mean you shouldn’t try again later. They might miss really good engineers, but at least they’re not hiring people who can’t do the job.

An onsite will typically last half a day or the whole day. The onsite is your best opportunity to interview the company as well. Don’t think it’s just them asking you questions; this is your time to make sure this is the place you want to work. I had an onsite early in the job search which I breezed through, and that set off red flags for me because it seemed like they were just hiring anyone they could. You don’t want to work in that kind of environment because you can’t trust whether your coworkers are competent.

Throughout the day you will meet with a bunch of different people, most likely members of the teams they are thinking of placing you on. This is to make sure you “fit” into the team culture and meet their needs. Some companies will make you go around to all the different people, while others keep you in a single room and bring everyone to you. This is usually not a cakewalk. During the onsite, you will be talking (mostly re-answering all the questions you’ve answered during the phone screens) and whiteboarding. In my opinion, whiteboarding is the hardest part of the tech interview process. You are given limited time, limited whiteboard space, and a lot of problems. They want you to write bug-free, syntactically correct code (no pseudocode) to see how well you do under pressure.
I find that the better tech companies mostly ask generic programming questions, but if the position requires specific knowledge of certain platforms/languages then I’m sure they focus on those specifics.
After the onsite, you’ll probably talk to an HR person who will tell you that the company will get back to you as soon as they’ve decided whether or not you got the job. At this point, if you got the job, they will probably let you know very quickly. If you don’t hear back from them, that is most likely a flag that you did not get the job and they don’t want to put themselves in a position of liability by telling you that (which totally sucks). You’d think the bigger companies would be better about this, but even they do it. If you got the job, then congratulations are in order. If you didn’t, don’t feel too down. Try and remember where you might have had difficulties during the onsite and focus on training yourself to not make those mistakes again. If you can get the HR person to tell you exactly what you need to work on, that’d be great. Unfortunately I’ve found that most HR people want nothing to do with you if you didn’t make it past the onsite. As soon as you’re up for another interview round, they’ll be your best friend, but as soon as they get a negative interview result, they pretty much just clam up and try to speak to you as little as possible from what I’ve noticed. Anyway, hopefully these notes help someone out there.
“You’re standing on the surface of the Earth,” Musk begins, according to the book. “You walk one mile south, one mile west, and one mile north. You end up exactly where you started. Where are you?”
This is an interview question Elon Musk poses to job candidates. There appear to be two answers. The first, the north pole, is the obvious one. The second is not so obvious: start at any point on a ring roughly 1.16 miles from the south pole. Walk one mile south and you reach a ring whose circumference is exactly one mile; walk one mile west and you go all the way around that ring, ending up right where you began walking west; then walk one mile north and you’re back where you started. (Strictly speaking, any ring with a circumference of 1/n miles for a whole number n also works; you’d just lap it n times.)
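Back-of-the-envelope, treating the ground near the pole as flat (a fine approximation at this scale), you can work out where that starting ring sits:

```javascript
// A circle with a one-mile circumference has radius 1 / (2π) miles,
// so the trick works if you start one mile north of that circle.
var ringRadius = 1 / (2 * Math.PI);     // ≈ 0.159 miles from the pole
var startDistance = 1 + ringRadius;     // distance of the starting ring
console.log(startDistance.toFixed(3)); // "1.159"
```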
Finally getting around to writing on this thing again. Some things have happened since I last wrote. I got engaged around the end of October last year to an amazing woman. Woot! We decided to give ourselves plenty of lead time for the wedding, so we’ve set it for sometime in 2016. I’m working on a separate site for the wedding and engagement stuff, mostly an informational static setup to show off some engagement pics (which have yet to be taken) as well as general information like the who, what, when, and where. I’ll post about it later whenever it’s ready.
I’ve also started listening to hip-hop. It’s not like I never used to listen to it ever, but I’ve always been a rock guy. I could only take hip-hop/rap/r&b in small doses before. Now it’s all I want to listen to. I wanted to start something new for recording my experience while exploring the depths of the hip-hop multiverse, but realized that I already have this blog and should probably use it rather than start something else which I will slack on maintaining.
As for programming, I’ve downloaded the new Unity studio and am currently trying to learn how to make a mobile game for android. They’ve really put a lot of work into the studio and for the most part, you just supplement generated code with custom scripts which you can conveniently write in C# (some would call that cheating, but for a newbie like me, I call that awesome).
I’m slowly getting around to playing the guitar again and thought I’d try and learn the song Jolene by Dolly Parton to help facilitate the readjusting period.
There’s also this version which is the original slowed down to 33rpm (spoilers: it’s awesome).
This one shows The White Stripes performing it live.
And here is one by Dolly Parton’s god daughter, Miley Cyrus.
A little while ago, I created a recipe on IFTTT to link my facebook to this blog and turned it on to see how it went. Today I’ve turned off the recipes and have decided to write specific content for the blog. The point was to try and intersperse some of my social media posts with my blog posts, but in reality the blog just became a mirror for my facebook feed since I don’t post as frequently as I should.
IFTTT itself is still a rather useful service in my opinion. One of my currently enabled recipes turns my phone to vibrate mode once it’s connected to the wireless network at my office. Once it disconnects from the network, it puts the ringer back to full volume.
I really think IFTTT is on to something here. The ability to tap into events that happen in your daily life and use those as triggers to initiate actions is essentially an API for life. We just need to expand it and see what comes out of the explosion of apps and services.
I’ve been making a push lately to do more with linux. I shrunk my main Windows partition and installed Ubuntu 13.10 on there with the stock Unity interface. As usual, after a few days, small things would get to me about the interface and just linux in general. I just didn’t like the way Unity looked. The interface certainly looked nice, but the interaction just wasn’t quite there yet for me.
Then 14.04 came out and I started reading about Xubuntu. I’d never really tried a different flavor of Ubuntu, always going for the stock version because I figured if I had any issues, most support would probably be for the vanilla. For those that don’t know, the different flavors of Ubuntu are differentiated by the desktop environment they use. Kubuntu uses KDE, Lubuntu uses LXDE, Gnome Ubuntu uses Gnome (durrrrr), and Xubuntu uses XFCE. While the underlying system is the same old Debian-based Ubuntu in all, each feels very different. I went with Xubuntu over Lubuntu because I didn’t want just minimalism. I want some features too!
Rather than do a clean install of Xubuntu, I had just upgraded my Ubuntu system from 13.10 to 14.04 LTS, so I followed the instructions and proceeded to install Xubuntu and get rid of some gnome dependencies I didn’t need anymore. After the installation, I rebooted, switched the desktop session to Xubuntu’s Xfce, and logged in. Beautiful. Just beautiful.
Xubuntu has a central start-menu-style launcher called Whiskermenu, attached to a panel at the top of the screen. It has a dock system similar to Gnome’s, including a port of the panel indicators (mail, bluetooth, network, social, etc.) that we’re all used to. The Nautilus file manager has been replaced with Thunar, a lightweight file explorer. It’s noticeably faster than Nautilus, but that’s probably because most frills have been stripped out. I quickly realized that the Ubuntu software center looks nice but is basically crap; Synaptic package manager blows it out of the water. For a dock, I’m using Cairo Dock (think OS X dock), which is a bit weird to get used to at first but works fantastically once it’s properly configured. I’m still using Deluge as a torrent client, and apparently a lot of docks support Deluge via dock widgets that show cool things like total up/down speed, number of active torrents, etc. I’m using Google Chrome as my web browser, but for some reason it runs sluggishly in Xubuntu. Pages load fine, but any time you click or right-click on something, it takes ages for the application to respond. For video, I’m using SMPlayer, and for audio, I’m using Clementine. Both seem to work fairly well so far, but I’ll admit I haven’t given them a thorough run-through.
Now to gaming. Still the number one reason I primarily used Windows at home. I installed minecraft which works amazingly well on linux. Then I installed Steam. Steam works surprisingly well on linux. I did have some odd little issues initially such as TF2 starting on the wrong monitor and the mouse pointer screen location differing from where it actually clicked, but after getting those squared away, everything just worked.
I’m still working on migrating over my software development process (editing everything with gEdit just doesn’t cut it) which has me trying out a lot of new programs. I will try and write another blog post detailing what I find, but for now, I’m going to just enjoy getting used to linux and Xubuntu!
As I get older, I notice my motivation to write blogs/social media has decreased significantly. Perhaps this is why products like Snapchat and Tumblr have largely escaped my attention. Shoot, I still use IRC, the dinosaur of internet communication. As for what I’ve been doing recently, I’ve put some work into a django port for thehardwareproject.org, tried out some new programs to replace ones I’ve been using for years (Deluge replacing uTorrent, etc), and am actively looking at starting up some new dev projects for 2014/15. I’ll probably try and redo trekktalk.com into something useful, and maybe give Android Studio a shot when I attempt my next android project.
This was just too funny.
So on the advice of my friend Luke, I bought Rocksmith a little while ago even though a new version is coming out in October (Rocksmith 2014). It’s actually a pretty good deal right now on Amazon (link): for $25 you get the Real Tone cable and the PC version of the game on DVD, and it comes with a Steamplay code so you can install it via Steam and have it on all your computers. I’m not exactly learning on Rocksmith since I’ve been playing off and on for around 17 years, but I’ve found it to be incredibly useful and pretty fun.
I think the best way to think of Rocksmith would be not as a video game, but as a practice or training aid. It’s not meant to be the teacher, god no. But it is a great supplemental tool for practicing and building up skills. Interestingly enough, your goal while playing is to keep doing each phrase perfectly so you can level up that phrase and get more notes (more points) until you reach the end game (master mode), which is no notes at all. This is an example of a song in normal Rocksmith mode. Excuse my crappy sounding D5.
I’m on Twitch now and occasionally stream my Rocksmith sessions, so if you want to check that out, click here. But yea, back to the main point. This is the future. I can totally see programs like this being used by music teachers to supplement home practicing. It’s not a bad idea, since most music students are younger and a video game might be better at holding their attention. Perhaps the days of dreading instrument practice are over, but we’ll see.
Spam, the food (not the email), has been a staple in my family’s pantry for as long as I can remember. When I was younger, I thought it was like that for everyone’s family. But as I started to find out as I got older, Americans generally disliked spam. From what I gather, it seems like most people associate spam with a lower class of eating; something along the lines of “I’d only eat that if I was poor.” I still meet people today that have never even tasted spam.
I’ve since read up on it and discovered that South Korea is the number three consumer of spam in the world, with Guam and Hawaii being the first two. In fact, most of the places where spam is popular coincide with locations of large US military installations. Basically, the army supplied spam to its soldiers, and locals would often get it through the black market or barter. And the places with a large US military presence were usually places where war had made food scarce, so something like spam was considered extremely valuable.
I still keep up the tradition of eating spam, usually just fried up and served with rice. Last time I went to the grocery store to get it, there were numerous varieties. They had pepper, jalapeno, bacon, and some others. I will always just get the original (and maybe the bacon on occasion) but I’m glad they’re trying to innovate in order to appeal to a newer generation of spam eaters. The legend of spam must live on!
Lately I’ve been getting more and more interested in some non-tech topics such as photography and fashion. I’ll save photography for another blog post, so let’s talk a little about fashion today, specifically jeans. More specifically, raw denim.
I’d gone most of my life without really ever hearing the term “raw denim.” I think I’m not alone in this either. In our pre-fabricated consumer society, innovation (and laziness) has led us to an age where we really don’t have to maintain our possessions anymore. Cars have been reduced to just getting your oil changed every 3-5k miles. Pre-packaged/fast foods have made it so that you could probably go your entire life without having to know how a stove works. Clothing is now so cheap that it’s quite normal for people to go shopping every season for new clothes because their old clothes are already frayed or coming apart at the seams.
Up until a few months ago, I kept wondering why my jeans (which I wear every day) would end up becoming ultra soft after only a year. Looking into my closet now, I have about 5 pairs of Banana Republic relaxed fit jeans. Three of those have the “ultra soft” texture I mentioned; only the remaining two have any “crisp” left in them. Then I read up on jean maintenance and realized my aggressive cleanliness is what did them in. While reading up on the damage washing does to jeans, I found out about raw denim. Basically, towards the end of the jean-making process, the indigo-dyed denim is washed, which removes a lot of the indigo and sets a very standard fade pattern. Raw denim simply skips that wash step, so the indigo is still mostly in the fabric while you wear them.
If you decide to wash them, the general rule is to wait about six months (to get a proper fade) and then it’s really just a matter of how far you’re willing to go. For normal people, you could probably just flip the jeans inside out, do a simple cold water wash with a tiny amount of detergent in the washer, and then hang dry them upside down. For the more hardcore people, you could fill up a bathtub with cold water, mix a little detergent in there, flip jeans inside out, put it in the tub weighing it down so that it’s totally submerged, leave it for around 45 min, rinse with cold water, and hang dry upside down.
There’s a process some manufacturers use known as sanforization that basically pre-shrinks the denim for you so it’s not so much of a shock when you first wash them. If you happen upon a pair of unsanforized raws, you are supposed to go through the ritual of pre-soaking them which entails you putting on your new pair of jeans and soaking yourself in a bathtub to pre-shrink them to your body. There is no end to the shenanigans people will go through in order to achieve the perfect fade. Some people have even given up on washing their jeans at all.
As for me, the reason I’m getting around to posting about this today is that my pair of Gustin #12 Charcoal, made out of sanforized raw selvage denim from Japan, was finally delivered. After wearing them for the past 3 hours, I love them. It’s a great fit and they just feel so crisp! I’ll take some pics and try to document the experiment of my first fade. Thankfully I don’t have to pre-soak because I really didn’t feel like sitting in a tub with my jeans on, haha.
First off, let me say it’s been a while since I’ve last blogged. Since November in fact. And the reason I’m writing here now is, yup you guessed it, I need to rant yet again. So here’s me getting my rant on…
SimCity 2k was the source of countless hours of fun for me back when I first started getting into PC gaming. I remember numerous nights of playing till 4 in the morning, trying to build the “perfect” city. Since then, I’ve played SimCity 3k and SimCity 4 and loved them all. There are plenty of facets to the SimCity games for there to be an appeal to most gamers who play sim or micro-management type games. Network management (traffic), city layout, finance management, and market manipulation are just a few I can think of off the top of my head. What I really loved about this game though, was that the sky was the limit. You were the mayor/God. You could transform the earth. You could rain down destruction/disaster if one of your sims even looked at you funny. You could demolish an entire row of high-rise condominiums without even evicting the tenants first, just to build a new shopping area with a nice medium-sized park so your sims could have their own Tysons Corner Galleria area. If you could imagine it, you could at least try to build it.
Fast forward to today. SimCity 2013 has been out for a bit now and the reviews that were originally positive have since turned negative. Maxis did some initial damage control going so far as to have a Q&A on Twitter with the head of the company, Lucy Bradshaw, back when the focus of the problems seemed to be on server issues and players couldn’t get online. Now that the initial launch week stress has cleared, players have come back with new grievances which the company might not be able to just throw more servers at, including pathfinding issues as well as an overly simplistic agent model which might end up causing more problems than it solved. Couple these problems with the anonymous simcity dev who earlier this week said in an interview that the company’s claims of the game depending on online connectivity are exaggerated (take with grain of salt), and you have a recipe for hell week in the PR department at EA/Maxis.
As for me, I’ve returned the game for a refund because it was just unplayable and I wanted to send a message with my wallet. Do I think they’ll listen? Probably not. Do I think the game was worth $60? Nope. I’ll most likely pick it up when it goes on sale for like $10 (which is probably going to be sooner rather than later I’m guessing given the current bad press around the game).
I managed to get a copy of Windows 8 the other day for $15 so I decided to give it a shot even though I’d probably not have upgraded for a while (if at all) because of all the negative press. Well, it wasn’t solely the negative press. I’d read up a bit on the metro UI changes being introduced and was a bit hesitant on having to learn a new UI paradigm, but I’m glad to report my fears were overblown. After a few hours of casual usage, I’ve slowly come around to the new OS and have even come to appreciate some of the design decisions (while lamenting others).
I didn’t have too much on my laptop that I couldn’t get back with a cloud sync. I used the Windows 8 upgrade assistant program off the Microsoft website. It downloaded the Windows 8 image and then let me decide if I wanted to put it onto USB, or optical media, or just run right there. I put it onto USB in case I wanted to use it for the future and proceeded with the install. It went rather quickly on my Thinkpad T520 and was probably up and running in about 15-20 minutes. Then it finally booted up to the actual “Start” screen with the tiles.
I WER CONFUSED. Not gonna lie. This was a bit intimidating at first. I clicked on the IE tile and got a not found page because I hadn’t connected it to the wireless network yet. Around this time, I figured out that the windows key on the keyboard brought up the start screen. So I tried out the desktop tile and it brought me to something that resembled the Windows 7 desktop minus the start menu. I started getting to work doing all the post-installation work like setting up the wireless, installing Chrome (although IE 10 wasn’t bad from what I saw, but… IE… lulz), checking out the new changes to Windows Explorer, etc.
I’m still messing around with it and getting acclimated to the different application modes (metro vs desktop). Some of it feels awkward on the desktop, and you can see where it wouldn’t be so bad on mobile (e.g. the Games app tile). One quick observation is that file operations such as copy and delete seem dramatically faster. I’ll report back on what I find later.
So we’re looking for the word “the” with the case-insensitive flag enabled so it will match “the”, “THE”, “The”, etc. Here’s some sample subject text:
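Any sentence containing both a capitalized and a lowercase “the” will do; the classic pangram is a handy choice:

```javascript
// Subject text to search. It contains two case-variants: "The" and "the".
var subject = "The quick brown fox jumps over the lazy dog";
```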
If we go with this “as is”, we’ll get something like the following:
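A first attempt might look something like this (console.log stands in for alert so the snippet also runs outside a browser):

```javascript
var subject = "The quick brown fox jumps over the lazy dog";
var re = /the/i; // case-insensitive, but no global flag yet

var match = re.exec(subject);
if (match) {
  console.log(match[0]); // "The"
}
```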
This will alert “The”, but what about the other instance of “the” in the sentence? How come that wasn’t matched? Oh! We forgot to set the global flag on the regex object. Now it should look like this:
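That is, the same pattern with the g flag added:

```javascript
var re = /the/gi; // g for global, i for case-insensitive
```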
And the matching code now looks like:
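Something along these lines, calling exec() until it returns null (again with console.log standing in for alert):

```javascript
var subject = "The quick brown fox jumps over the lazy dog";
var re = /the/gi;
var match;
var found = [];

// With the g flag, each exec() call resumes from where the last
// match ended, so the loop walks through every occurrence.
while ((match = re.exec(subject)) !== null) {
  found.push(match[0]);
  console.log(match[0]);
}
```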
Now we get both instances of “the” in the sentence. So what exactly is going on here? Let’s slightly change that last block of code to this:
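A sketch of that change, printing lastIndex next to each match:

```javascript
var subject = "The quick brown fox jumps over the lazy dog";
var re = /the/gi;
var match;
var results = [];

while ((match = re.exec(subject)) !== null) {
  // lastIndex now points at the character right after the match.
  results.push(match[0] + " " + re.lastIndex);
  console.log(match[0] + " " + re.lastIndex);
}
// results: ["The 3", "the 34"]
```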
So here, we see the regex object has a property called lastIndex that is updated after every call to exec(). If you run the code, you’ll get “The 3” and “the 34”. Once exec() finds a match, it sets the regex’s lastIndex property to the index of the character right after the matched text. The next time through the loop, exec() checks lastIndex and starts searching from that position.
I’ll end this with a couple of cautionary warnings:
- Don’t create the regex inside the loop. A regex literal in the loop condition produces a brand new regex object on every pass, so lastIndex is always reset to zero; that means an infinite loop if it’s run against text with a match.
- Modifying the subject string during the exec loop is dangerous and can also lead to an infinite loop.
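To make that first pitfall concrete, here’s a small demo (with a safety counter added so it actually terminates):

```javascript
var subject = "The quick brown fox jumps over the lazy dog";
var match;
var iterations = 0;

// BAD: the literal creates a fresh regex object on every pass, so
// lastIndex is always 0 and exec() keeps returning the first match.
while ((match = /the/gi.exec(subject)) !== null) {
  iterations++;
  if (iterations >= 10) break; // safety valve; it would never stop on its own
}
console.log(iterations); // 10
```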
Hope this helps some people!