Fuck Pressure

I hate pressure.

I hate pressure so much that I do whatever it takes to avoid it, consequences be damned.

The pressure to have a really awesome personal site as a web developer kept me from ever making one.

The pressure to be the kinda guy that girls would wanna date kept me from appreciating myself, quirks and all.

The pressure to live up to people’s expectations kept me from thinking critically about what I was doing and if it really made sense to me.

Pressure inflates certain points and downplays others. It becomes so much harder to make clear-headed rational decisions when it matters most.

“Dude, you can’t get diamonds without pressure”

Fuck diamonds, fuck pressure and fuck you too.

You do have a point though.

Pressure pushes us to accomplish things we never would have under normal circumstances.

I’m not saying pressure is stupid or pointless. It’s definitely made me more aware of how much further out my real boundaries are.

Even when I fail to overcome it, I still learn valuable lessons.

After years of being overwhelmed by the pressure for an awesome site, I learned that waiting for things to be perfect isn’t worth it.

Imperfect conditions don’t put me under pressure anymore.

I never became the guy I thought girls would want, but I learned that some girls like me for who I am, and it’s much better to be in a pressure-free relationship than to continually struggle to live up to arbitrary expectations.

When you overcome pressure, you expand your limits. When you cave to it, you learn about yourself. As frustrating as it is, it definitely has its benefits.

I guess the best way to summarize this is “fuck pressure but embrace it”.

What Does Success Look Like?

I’m in a WhatsApp group conversation with a few friends of mine. Our conversations typically revolve around hip-hop: decoding lyrics and proving how much more talented our favorite rappers are than everyone else’s favorites.


Eminem is Alex’s favorite. He’s been rapping for many years, has a lot of awards, is critically acclaimed, earned the respect of his peers and is financially successful. He’s also succeeded in an industry that doesn’t have very many people who look like him. Totally understandable pick.


Nivlek likes Kendrick Lamar and Lupe Fiasco. Both newer faces than Eminem. More importantly, supremely talented. Layers of cleverly laid references in their lyrics. Definitely not as financially successful as Eminem, but their wordplay is outstanding. They produce high-quality music.



Me? I like MF DOOM. Crazy intriguing lyrics and overall very talented. Even produces and makes beats. His face is always hidden behind a giant metal mask. Financial success? Who knows. Most hip-hop fans don’t even know him. He’s that obscure.

I brought this up because after my conversations with Ire, I’ve been thinking a fair bit.

What does success look like to me?

Is it looking more like Kendrick? Overflowing with talent and a promising future in the limelight? Or do I want to be like MF DOOM? Completely out of the public eye, but those who need my skills know where to find me?

Do I wanna be like Eminem? Most of the world would never consider me successful if I don’t have an obvious pile of money. A nice big house. Great car. Very popular clients. Plenty of accolades. Funded by popular investors. A product that’s considered a unicorn (don’t ask .. it’s a nerd thing). Do I need the trappings of success to be a success?

Can I be more like Kendrick? The guy the Eminems of software engineering wanna work with because I’m oozing raw talent, the one they need to build something amazing, special and really technically impressive that’d give them an edge? Is this what success looks like?

Or is it MF DOOM that speaks to me more? Unknown to the general public, but the person the Kendricks and Lupes of software engineering look up to because I build things that help them be awesome. It’s probably not as financially rewarding, but do I need mountains of cash to be a success?

Why I Write

Had an interesting lunch with Ire today.

She writes about web development on bitsofcode, which is ridiculously popular. People have shared stuff she’s written on sites like Reddit, and some of her posts have gotten to the front page. Amazing!

In contrast, nobody reads what I write or uses what I’ve made. I’d come to believe that would change over time. Then I met Ire.

She started writing in March and already gets so many people reading her blog and using the things she’s made. She clearly knows a thing or two about making things people want. I asked her if I could buy her lunch, hoping I could pick her brain over the meal, and she agreed.


During our conversation, she asked me a few pointed questions. Answering them opened my eyes.

“Who is your audience? Who do you write for?”

“I write for myself.”

“Oh you mean people like you?”

“No .. I actually mean for myself. Once, I wrote a post with literally no objective. I let my mind wander and just wrote.”

Interesting. Why was I surprised nobody else reads my blog?

She writes to help others learn about building websites. She researches and puts together brilliant, concise pieces that help you understand a little bit more. An overwhelming majority of her posts contain information that is valuable to me, and I’ve been building sites for more than 5 years.

People find her posts useful and share them with others. Her traffic isn’t coincidental, really. She writes for others and the results are commensurate.


Even the things she’s made get used.

“When I’m building, I make it really easy for people to initially use and customize. They can go deeper and tweak it more later on, but it needs to be super easy for them to use in the beginning.”

When I build outside work, it’s typically to an audience of one. Myself, or the friend who is going to end up using it. It usually requires modification for others to use it. Why am I surprised nobody uses stuff I’ve made? It requires an upfront commitment in the first place. Nobody has that kind of time.

Overall, it was a very enlightening conversation. I’m writing to an audience of one. I’m building for a user base of one. The results are commensurate.

Adim and I started blogging to network with developers in Nigeria. We felt isolated and wanted to meet our peers, connect with them and work on interesting things together.

To be honest, I’d also like for people to know what I do, what I’m capable of and want to work with me. That’s why I started writing more frequently, documenting my thought process as I work.

The trouble is I’m writing with no consideration for my target audience.

This is more like a public diary than anything else, at the moment. Nothing wrong with that, but it doesn’t address why I write.  If I want specific results, I need to do things that move me at least one step in that direction.

I need to take some time to write for others if I want them to read my writing. It needs to be packaged in a format from which they can quickly glean the information, because apparently other people don’t like reading. I knew this, but since I love reading, I never really optimized my posts for the majority who don’t.

Making sure the things I make are easy for people to pick up and use with as little configuration as possible is super critical. Adim’s always talking about how things “just work” when we get into our Apple vs Android/Windows discussions. Build it so it works with the least amount of fuss and configuration. Make it so that it just works. I guess I’m starting to understand what he means.

I’m not going to stop writing for myself. I like doing these sorts of things. What I need to do is keep in mind why I write.

On a regular schedule, package posts for others. When I’m extracting some code to form a library, make it as “drop in” as possible, with excellent default settings.

I write and build software for others to enjoy. It’s time I started ensuring the things I make reflect that.

Building on a Shaky Codebase

Yesterday I resolved to thoroughly test my current project before building any new features.

Figured the best place to start would be a control flow library I wrote, which is used all over the API server. I use it to make complex processes manageable and readable.


Comments are practically redundant because the name of each step explains what the snippet of code does.

I use the library all over the codebase, so it makes sense to thoroughly test it and proceed from there.

Imagine how humbled I was when it failed 20% of my tests.


I may have gotten away with this for as long as I was the only person using it, but as soon as anyone else decided to use cjs-task, they had a 1 in 5 chance of having it fail them unexpectedly.

I’ve been building on a shaky codebase.

Time to pass all the tests. Then write some more.


Test It Before You Wreck It

I have an embarrassing programming secret. Please don’t judge me harshly.

I don’t use automated tests.



I’m not a complete moron. Whatever I’m currently working on is thoroughly tested manually before being committed to git. And I commit pretty frequently, with descriptive commit messages. I also fork and merge religiously.

I haven’t really felt the need to set up automated tests. Not til yesterday.


I’m currently building an API server for a project. An offline-capable smart spreadsheet browser app.

Made some updates to the client, wired it up to the API endpoint to test the feature, and the server came crashing down. My changes were to the client, which lives in a separate repo, so I hadn’t changed anything on the server. Restarted the API server and tested it again. Crash and burn.

What went wrong?

A quick investigation pointed to a recent cleanup where I renamed a few variables to make things more readable. I hadn’t changed every instance of a particular variable. Two slipped by me. I didn’t even realize there were two instances til I fixed one, pushed the change up and the server crashed again.


It’s supremely easy to make mistakes or not check as thoroughly as you think, which introduces bugs into your code.

Manual testing is okay, but you can’t count on it to catch every single problem. And every single problem matters in production. It’s the difference between a well-running app and a steaming pile.

No more relying on myself to make sure everything is working properly. I’m writing a complete test suite for the server before I build any further.

From now on, I’ll test it so I don’t wreck it.


How I Built My First Desktop App in 3 Days

“Twelve scanners! For that Sheraton job there were three scanners and it was too much work to review properly. The workload is going to be insane.”

Homeboy, pessimist that he is, was highlighting the downside of his company’s most recent contract. To be fair, he was one of the supervisors on the Sheraton gig and was literally drowning in the work, so the pessimism wasn’t unmerited.

I’d dropped by to return a flash drive and somehow ended up in a lengthy discussion about his challenges at work.

“What makes a supervisor’s role so hard?”

“We need to make sure the scanners are doing their job correctly. We were excited about almost completing the last job when we discovered a problem two scanners made weeks ago. We had to spend an extra month fixing it up.”

“What exactly do supervisors do?”

“We check to see if the scanned documents tally up with the report sheet the scanner produces. A document is a collection of PDF files. We make sure the PDFs are in the right document and have the right number of pages.”

“That sounds tedious.”

“In a week, a scanner can generate up to a thousand PDFs. It’s very easy to make a mistake and put a file in the wrong document or write the wrong page tally. It’s our job to catch and correct those mistakes.”

I stood in silence as I absorbed the gravity of the situation. 1000 PDFs per scanner is a frightening amount of data to manually verify. Pessimism must be contagious, because I felt myself coming down with it. Making software is tricky, but it certainly wasn’t a tiresome grind like this.

Maybe I could build something to help him. He could select the folder containing the documents and it would determine which PDFs are in which documents and how many pages each PDF has, then generate a report.

“I’ll build something to help you this weekend. Don’t sweat it”.

“Dude I’ll never ask you for your sister’s phone number again in my life if you do”.


I’ve only built web and browser apps, but everything about this sounds like a desktop app.

It can’t be a browser app. There aren’t APIs in the browser that allow me to crawl a client’s file system. It can’t be a web app either. Uploading that many PDFs is seriously time-consuming and data-intensive.

If I could get Node.js on his system, I could treat his computer like a server and build a web interface for him to interact with. Made for his computer, using web technology.

My search led me to Electron. Desktop applications built with Node.js? Everything I need to build something for a desktop with tools I use daily.

Getting “Hello World” up and running in Electron was a piece of cake. The documentation gave me enough information to do so on the first page. The code was easy to reason about and immediately dealt with the finer points of what differentiates Electron apps from regular Node.js apps.

There’s a main process and a render process. Think of main as the server and render as a browser. In a traditional web application, all the brains are on the server and the client browser talks to the server by sending text via HTTP. In Electron, the render process talks to the main process by sending text through a package called ipc. You coordinate both processes via ipc to accomplish your objective, just like the browser and server are coordinated via HTTP requests.

Once I understood this much, the plan crystallized. Use Node’s fs package to do the crawling, use comma-separated values to generate the Excel report and use the interface to collect input from Homeboy.

Time to start wiring this thing up!


Being able to drag and drop the folder containing the documents would be the most intuitive way to get the document location, so I started building from there.

Normal browsers have a security model that prevents you from getting a file’s full path with the File API, but it was there for the taking in Electron.

Used ipc to send the paths to the main process and started writing the script to crawl the paths.

The first version I wrote was procedural. Get a list of folders and retrieve their content.  If any are PDFs, report them. If they are folders, go into those and do the same thing.

No matter how deeply nested the folder structure was or how many files there were, it returned the information in less than a second. Amazingly fast!

Every virtue the first version had was made irrelevant by how unusable the user interface was. It had one line of unstyled text that said little more than “drop folders here”. When you did, it created some barely-styled boxes with the folder path, number of files and number of PDFs. It worked though, so I was super excited to demo it to him.


Homeboy watched it run through the test data and display the results.

“Can it tell me how many pages are in the PDFs?”

“That’s the very next feature I’m adding.”

“Okay. It needs to be able to tell me how many pages are in the PDFs and create an Excel sheet with that information.”

Was it too much to expect a little gratitude at this stage? No matter. I’ll just use a package from npm to get the page count. Shouldn’t take me more than 15 minutes max.

Two npm packages and as many hours later, still no page counts.


My first attempt used an aptly named package: pdf_page_count. The package kept returning “undefined” no matter what I did. Homeboy’s periodic “is it done yet?” didn’t help matters. After taking a look at how many people had downloaded it recently, I came to the conclusion that the package was broken. I needed to find an alternative.

pdf2json seemed to fit the bill. Way more uses and downloads than pdf_page_count, so I installed it and gave it a shot.

No such luck.

I was so close! All I needed was the page counts, and then I could output them on the interface and generate the Excel report.


It didn’t help that the instructions for pdf2json were woeful, so it felt like I was listening for the wrong event or setting the listener on the wrong object or something.

In my desperation I started stepping through the source code and adding log statements so I could follow the process through and see what was wrong.

I was listening to the wrong object. The right one was nested inside it. Quick change, restart app and … nothing.

Turns out the information isn’t passed to the nested object. You have to check the outer one again. Quick change, restart app and … still nothing?!?


Oh wow. Turns out I was only passing the name of the PDF, not where it was located. Quick change, restart app and … PAGE NUMBERS!

Quickly exported the app to the other laptop for him to test and proceeded to watch it hang up his computer for five minutes before producing the report.

“Is it exporting to Excel yet?”


Though I was able to extract page numbers, I was completely unsatisfied with the performance. It took an unreasonably long time to get them.

To make matters worse, it completely hung up the system while getting the page counts. Forget multi-tasking. The mouse barely responded to movement and you certainly couldn’t click on anything.


pdf2json grabs a whole lot more than page counts. It tells you the height of each page, what is contained on every line (vertical AND horizontal), text and so much more. Useful information for someone else but totally irrelevant to Homeboy’s needs.

I took another look at pdf_page_count, armed with the knowledge that its failure was most likely due to me not passing the full file path.

A quick swap later it was running correctly. Way faster too! Understandable, since it’s a one-trick pony. Or a one-trick Veyron, speed improvement considered.

Processing the PDFs still hung up the system though. The app would navigate to the folders and go as deeply as it could, sorting between directories and files and trying to get the page numbers from the PDFs. My sample data has 2999 PDFs in total, and the current architecture processed them all at the same time. No wonder the system became completely unusable.

I created a queue where the PDFs to process are placed, and instructed the app to process a maximum of 8 at a time (get it? 8 looks like an upright Möbius strip). The end result is that far fewer PDFs are processed at once. It definitely isn’t as fast as opening all 2999 at once, but the system doesn’t experience any performance degradation. He can drop a bunch of folders to be processed, muck around on Facebook, check back in 5 minutes and it’s all done.
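Such a queue can be sketched in a few lines. This is an illustration of the idea, not the app’s actual code; the names (createQueue, worker, pending) are my own:

```javascript
// Sketch of the processing queue: cap how many PDFs are in flight at once.
function createQueue(worker, concurrency) {
  var pending = []; // jobs waiting for a free slot
  var active = 0;   // jobs currently being processed

  function next() {
    while (active < concurrency && pending.length > 0) {
      active += 1;
      worker(pending.shift(), function done() {
        active -= 1;
        next(); // a slot just freed up, so pull in the next job
      });
    }
  }

  return {
    push: function (job) {
      pending.push(job);
      next();
    }
  };
}

// at most 8 PDFs get their pages counted at a time; the rest wait their turn
var pdfQueue = createQueue(function (pdfPath, done) {
  // ...count the pages of pdfPath here, then signal completion:
  done();
}, 8);
```

The worker signals completion through the done callback, which is what lets asynchronous page counting keep the cap honest.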


I could have aimed for feature complete at this point and done the Excel exporting but the unusable user interface bothered me far too much. I knew when the app was processing and when it stopped simply because I had a console attached to the app. This shouldn’t be background information. This is data the app user needs to know, even if they don’t know they need it.

I wired up ipc to report how many PDFs were queued up, twice a second, so the user knows what’s left to process and how fast the system is running through it. I put an indicator at the top left that’s green when there’s no processing going on, and red with a pulsing orange light when it’s busy. A quick glance should be enough to let you know exactly what’s going on.

Not quite the most beautiful swan, but this was a Sunday afternoon. Being feature complete by the end of the day was far more important than making it pretty. Enhancements could wait.


I left the Excel report til very late because I’d just worked on a Meteor app with Adim, where one of the features I built was exporting datasets to Excel. My understanding of the problem space and what it takes was still very fresh.

Generating the data wasn’t a problem, but the method of export is slightly different from what it would be in a browser. Data URIs weren’t working in Electron, so I pushed the data back to the main process and wrote the file out with Node’s fs package.

Admittedly the workaround didn’t come to me immediately, but I realized I wasn’t alone, searched around, found a post online detailing the problem as well as a possible solution, and figured out exactly how to do it in my app. Problem solved, feature complete.



“Duuuuuuuude. This thing is awesome! You don’t know how much easier you just made my life. Thanks a lot man!”

Might be because I was expecting some pessimism but I was extremely pleased to hear how happy he was with it. He’s taken it to his office and is currently giving it out to his coworkers to use. I’m totally floored.

There’s only a handful of people using things I’ve made or libraries I’ve written, so every additional person enjoying what I’ve worked on brings a feeling of joy and motivates me to work harder to build more things.

Definitely going to build more things and write about them 🙂

Serially Iterating An Array, Asynchronously

Meet Derick Bailey. He runs Watch Me Code, has written a few programming books and blogs a fair bit, so he’s always on my radar when I need to learn a little bit more about programming.

A few months ago, he wrote a post, Serially Iterating An Array, Asynchronously, which was about a very specific programming issue he had to solve.

I recently found myself looking at an array of functions in JavaScript. I needed to iterate through this list, process each function in an asynchronous manner (providing a “done” callback method), and ensure that I did not move on to the next item in the list until the current one was completed.

Wait a minute … I faced that same problem a while ago!

My Node.js Express server routes have a list of steps they need to complete before sending out a response to the client.

The /signup route verifies the email address through an external service, bcrypts the password, stores the information in the database, retrieves the insert id and sends the information over to the client.

None of those operations are synchronous. I can’t run them in parallel either, since some operations depend on the result of a previous one. It’s pointless to continue if any step fails.

A serial list of functions that need to be processed asynchronously.

Derick posted his solution to the problem and where he felt it could be improved. My solution addresses quite a few of those, so I thought I’d share.


available on github and npm


A couple of points to highlight here.

1. task.end can be used to end a task prematurely. If any of your steps fails, use it to bring the task to a halt instead of wasting more time and resources.

2. task.next is a step control mechanism.

3. task.set / task.get can be used to store and return data relevant to the task at any step. Use it to store the initial data set, keep api responses, set flags … anything really.

4. Under the hood, task.end nulls the data store, the task list and the callback list after the final callback has been triggered, in an attempt to prevent memory leaks.

5. Derick uses a destructive process on the task steps list. I simply keep track of the current index and increment it after each task.next.

6. Under the hood, I’m backing cjs-task with a pubsub, so I can trigger events for updates to the data store as well as when steps are triggered. It’s currently not implemented and probably not necessary, but I feel it’d add tremendous value and make it easy to monitor or modify your task instance. Most likely just an excuse to justify using the pubsub instead of something dead simple like a hashmap.
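The mechanics behind those points can be sketched with a stripped-down re-implementation. To be clear, this is an illustrative toy, not the real cjs-task source; the names (step, start, next, end, set, get) simply mirror the API points listed above:

```javascript
// Toy serial-async runner sketching the mechanics described above.
function createTask(callback) {
  var steps = []; // the serial list of steps: { name, fn }
  var index = 0;  // current position; incremented instead of mutating the list (point 5)
  var store = {}; // task-scoped data store (point 3)

  var task = {
    step: function (name, fn) { steps.push({ name: name, fn: fn }); },
    set: function (key, value) { store[key] = value; },
    get: function (key) { return store[key]; },
    next: function () { // step control (point 2)
      if (index >= steps.length) return task.end();
      var current = steps[index];
      index += 1;
      current.fn();
    },
    end: function (err) { // halt early on failure (point 1)
      var done = callback;
      steps = null; store = null; callback = null; // null things out (point 4)
      if (done) done(err || null);
    },
    start: function () { index = 0; task.next(); }
  };

  return task;
}

// usage: each step runs only after the previous one calls task.next()
var log = [];
var task = createTask(function (err) { log.push('done: ' + err); });
task.step('verify email', function () { log.push('verify'); task.next(); });
task.step('hash password', function () { task.set('hash', 'bcrypted'); task.next(); });
task.step('store user', function () { log.push('store ' + task.get('hash')); task.next(); });
task.start();
```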

The longer I look at this, the more the two solutions look remarkably different, despite the similar API, the same job to be done and an identical operation loop. Interesting.

Looks like someone just released queuer.js, which takes a hybrid approach: it combines the kind of event hooks I was looking to build into mine with Derick’s approach to handling data.

Assembling a Practical JavaScript Toolkit

I was motivated by an opportunity to win N1,000,000 (around $5000) in a programming challenge this weekend.

The challenge was hosted on Codility’s platform. I’d never heard of their service before, let alone used it, so I really didn’t know what to expect going in.

On starting the timed test, the most notable constraint was my inability to import packages from github, npm or http. Maybe Codility lets you do it, but I didn’t have time to experiment and there was no clear way to do so in the interface.

No github. No npm. No libraries.

The thought of writing JavaScript without jQuery, Underscore or access to random libraries is very intimidating for many programmers.

It’s not every day you’ll experience such constraints, but you must be prepared to work under peculiar conditions. Your toolkit must be flexible enough that there’s hardly an environment you wouldn’t be productive in.

All the questions in the coding challenge dealt with manipulating collections of data and teasing out results from them. Interestingly enough, I’d written a blog post on Writing Reusable JavaScript a little while ago. The purpose of the reusable code I’d written? Iterating over collections of data and teasing out results from them.

Could I have done the same thing using jQuery, Underscore, Lodash or some other package out there? Yes.

However, I wrote my own function to understand what’s going on behind the scenes much better. Because I did, I knew the solution a bit more intimately and could pare my tool down to the bare necessities or add needed bells and whistles on demand. I can perform these operations in JavaScript environments without those libraries or fancy new language features.

When building your code toolkit, don’t be so dependent on ideal environments. You may work in a codebase that doesn’t use the latest ECMAScript features. You may not be allowed to use a transpiler. You may not even have access to packages.

You need to understand the principles of what your tools do and how to accomplish the same with the barest of necessities: a text editor and your wits.

If you find yourself heavily dependent on something, try to spend a little time creating a dependency-free version of it: a version you can copy, paste and use without relying on specific libraries or environment properties.

When I fell in love with the ability to decouple code with the publisher/subscriber pattern, I wrote my own pubsub. As my needs grew, I evolved it into cjs-noticeboard (a pubsub with a noticeboard pattern). Today, my server-side and client-side code use cjs-noticeboard extensively.
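A dependency-free pubsub really is only a handful of lines. This is an illustrative sketch in that spirit, not the actual cjs-noticeboard code:

```javascript
// minimal publisher/subscriber: topics map to lists of handler functions
function createPubSub() {
  var subscribers = {};

  return {
    subscribe: function (topic, handler) {
      (subscribers[topic] = subscribers[topic] || []).push(handler);
    },
    publish: function (topic, message) {
      (subscribers[topic] || []).forEach(function (handler) {
        handler(message);
      });
    }
  };
}

// publishers don't know who's listening; that's the decoupling
var pubsub = createPubSub();
var received = [];
pubsub.subscribe('user:login', function (name) { received.push(name); });
pubsub.publish('user:login', 'ire');
```

Copy, paste, and it runs in IE6, a Meteor.js app or a Codility challenge alike: no libraries, no new language features.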

Maybe I’m biased towards my own tools because I built them. I think what’s more important is that I understand exactly how they work and that I can take them anywhere. I can put them in IE6, a Meteor.js app or in a Codility challenge and know they’ll work exactly as expected.

You don’t have to build your own tools, but you must understand how to carry them with you to all sorts of environments. Not every environment will be ideal, but you need to be prepared to kick ass no matter the conditions.

I’ll conclude with some of my Considerations for Assembling a Practical JavaScript Toolkit.

1. Is this dependent on a specific version of JavaScript? (do I need a transpiler or shim to get this working everywhere?)

2. Is this dependent on a library? (Can this work without jQuery or Underscore or a specific framework?)

3. Is this single-purpose? (does it come with baggage I don’t need?)

4. Is it something I can make by myself? (do I understand the principles behind it or is it magic?)

Oh yeah … if I win the money I’ll be sure to write more about it 😀

Deal With It

It’s rational to expect a solution to be big if the problem is big, right?

Sound rationale, but inherently flawed. Big things are simply collections of little things working together. If one little thing stops working as expected, it can bring the whole thing down with it.

You would expect someone who makes a living assembling little things into big things to understand this intuitively, but you would be wrong. So very wrong.

I wrote about a problem I was facing with an app I built. I didn’t have much time to dedicate to investigating and debugging the issue, and since it was non-critical I decided to ignore it til I had time to dedicate to the matter.

When I finally got around to tackling the problem, I was amazed to find out the solution was literally one line of code.


Writing Reusable JavaScript

It’s very easy to lose sight of the big picture when writing JavaScript. Your attention is spent entirely on the little details of the bit of code you’re currently writing, not why you started writing it in the first place. This isn’t necessarily a bad thing. What’s problematic is doing it repeatedly for the same type of problem.

I’ll walk you through how I wrote a reusable function, how it evolved and my thought process the entire journey, so you can recognize the patterns and know when it’s time to make something reusable.

I’m working on a webapp that collects information from different members of an organization and displays it on their public website. This means much of the work is data storage and processing. The majority of my code will be looping over data structures and doing something in the loop.

No big deal writing it the first time, or the next. By the fourth or fifth time though, not quite as fun. All that code is trying to say is: take every item in this array and do something with it. I decided to make it a function, since even the most verbose function name is a lot shorter and more intuitively understood than the first two lines of that snippet.
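The original snippet isn’t reproduced here, so this is a reconstruction from the description; the names array_each and process_item come from the prose:

```javascript
// take every item in this array and do something with it
function array_each(array, process_item) {
  for (var i = 0; i < array.length; i += 1) {
    process_item(array[i]);
  }
}

// even the most verbose call reads better than a raw for loop
var doubled = [];
array_each([1, 2, 3], function (item) {
  doubled.push(item * 2);
});
```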

Every time I need to loop over an array, it now looks so much neater.

So far so good. My array_each function makes my code very readable and does everything I need. For now.

The next time I need to revisit my function is when I need to loop over a data structure and return the first match for a specific criterion. Normally I can use break to end the loop, but my process_item function isn’t actually in the loop. I need a way to signal from my process_item function that the loop needs to trigger a break.

To stop a loop, all I need to do is return false from my process_item function.

Best part is that it’s still backwards-compatible! I don’t need to modify my previous process_item functions because the new functionality triggers only when process_item returns false.
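A sketch of that update, again reconstructed from the description; only an explicit false triggers the break:

```javascript
function array_each(array, process_item) {
  for (var i = 0; i < array.length; i += 1) {
    // only an explicit `return false` breaks the loop, so older
    // process_item functions (which return undefined) are unaffected
    if (process_item(array[i]) === false) break;
  }
}

// find the first match, then stop looping
var first_even = null;
array_each([1, 3, 4, 6], function (item) {
  if (item % 2 === 0) {
    first_even = item;
    return false;
  }
});
```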

This is pretty good as is, but it can be better. Sometimes I need to know an item’s position in the array. I need to make sure it’s available when I need it.

I’m passing the current item’s index to the process_item function as a second argument. My previous functions aren’t broken by this update. JavaScript is cool enough to let me pass any number of arguments to a function, even if the function doesn’t use all (or any) of them. Most times I’d be dealing directly with the item, which is why I’m leaving it as the first argument. The index is passed as a second argument so it’s there when I need it, but can be safely ignored when constructing process_item functions that don’t need it.
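Reconstructed, that version looks like this:

```javascript
function array_each(array, process_item) {
  for (var i = 0; i < array.length; i += 1) {
    // the item stays first; the index is there if you want it
    if (process_item(array[i], i) === false) break;
  }
}

// e.g. find where an item sits, then stop
var position = null;
array_each(['a', 'b', 'c'], function (item, index) {
  if (item === 'b') {
    position = index;
    return false;
  }
});
```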

At this point I’m thoroughly pleased. I’m comfortable processing an array, stopping when I want and using the current item’s index to do other things. I’m pretty confident I won’t need to modify this again.

Unfortunately I set myself up for failure right from the beginning by forgetting something very important: arrays are not the only type of data collection.

Objects are conceptually similar to arrays, but different enough to make all the difference in the world.

1. Object keys are not necessarily numbers.
2. Objects don’t come with a built-in length property that tells us how many attributes they have.

This means it’s not possible to retrofit array_each to handle objects, because it needs to know how many properties it’s looping through and walks the array by incrementing a numerical index. Bummer.

I need to make an object_each with a similar interface to array_each, so that my process_item functions are identical regardless of what type of data structure is being passed.

Instead of using a for loop that increments the index being accessed in the array, I’m using a for .. in loop that iterates over the properties of an object. object.hasOwnProperty checks whether the property being accessed actually belongs to the object or is inherited. I only care about the data directly on the object, so this is how I filter out the other properties objects come loaded with.

I can safely pass the property name to object_each’s process_item because accessing an item in an object and in an array works the same way.

Here’s an example of what object_each can do.
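The example isn’t shown in this text, so here’s a reconstructed sketch of object_each and what it can do:

```javascript
function object_each(object, process_item) {
  for (var key in object) {
    // only walk the object's own data, not inherited properties
    if (!object.hasOwnProperty(key)) continue;
    if (process_item(object[key], key) === false) break;
  }
}

// same shape as array_each's process_item: (item, key) instead of (item, index)
var over_40 = [];
object_each({ ada: 36, grace: 45, alan: 41 }, function (age, name) {
  if (age > 40) over_40.push(name);
});
```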

Looking pretty concise! The code is neat and readable. I could leave it like this, but notice that object_each and array_each serve the exact same purpose and share the same interface. I can create a wrapper around both of them. The wrapper will determine which function is more appropriate and use it to process the data structure handed to it. Throw in a few checks to make sure everything is kosher and voila! One function to rule them all.

A few things are going on here. First off, each makes sure it’s being passed what it needs to work with and confirms the item processor is a function. Next, it detects what type of data is being passed to it to determine which function is appropriate – array_each for arrays and object_each for objects. I set each to default to object_each. In JavaScript, pretty much everything is an object. I can safely assume that if you pass an array, you’re interested in the array’s content, and if you pass anything else, you’re interested in the object’s properties. A notable exception is strings, which behave pretty much like arrays.

My previous example now looks like this:


Pulling it all together into a single reusable function results in:
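Here’s a reconstruction of the whole thing pulled together; the guard messages are my own invention:

```javascript
function array_each(array, process_item) {
  for (var i = 0; i < array.length; i += 1) {
    if (process_item(array[i], i) === false) break;
  }
}

function object_each(object, process_item) {
  for (var key in object) {
    if (!object.hasOwnProperty(key)) continue;
    if (process_item(object[key], key) === false) break;
  }
}

function each(collection, process_item) {
  // make sure we have something to iterate over and a real processor function
  if (!collection) throw new Error('each: nothing to iterate over');
  if (typeof process_item !== 'function') throw new Error('each: process_item must be a function');

  // arrays get array_each; everything else defaults to object_each
  if (Array.isArray(collection)) array_each(collection, process_item);
  else object_each(collection, process_item);
}

// one function to rule them all
var seen = [];
each([10, 20], function (item) { seen.push(item); });
each({ a: 1 }, function (item, key) { seen.push(key); });
```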

I can do further optimizations and cleanup, but that’s for another day and another post. The current state of each scratches my itch, so there’s no immediate need for me to optimize further.


I really hope this glimpse into my thinking and coding process has been helpful 🙂