Using RequireJS With Angular

Since attending Fluent Conf 2013 and watching the many AngularJS talks and seeing the power of its constructs, I wanted to get some experience with it.

Most of the patterns for structuring single-page webapp code use some sort of dependency management for the JavaScript, instead of relying on global controllers or other similarly bad things. Many of the AngularJS examples seem to follow these bad-ish patterns. Using angular.module('name', []) helps with this problem (why don’t they show more angular.module() usage in their tutorials?), but you can still end up with a bunch of dependency-loading issues (at least without hardcoding your load order in your header). I even spent time talking to a few engineers with plenty of experience with Angular, and they all seemed to be okay with just using something like Ruby’s asset pipeline to include the files (into a global scope) and making sure everything ends up in one file via the build process. I don’t really like that, but if you are fine with it, I’d suggest you do what you are most comfortable with.

Why RequireJS?

I love using RequireJS. You can async load your dependencies and basically remove all globals from your app. You can use r.js to compile all your JavaScript into a single file and minify that easily, so that your app loads quickly.

So how does this work with Angular? You’d think it would be easy when making single-page web apps. You need your ‘module’, aka your app. You add the routing to your app, but to have your routing you need the controllers, and to have your controllers you need the module they belong to. If you do not structure your code, and the order you load things with Require.js, correctly, you end up with circular dependencies.


Below is my directory structure. My module/app is called “mainApp”.

My base public directory:

- javascripts/
  - controllers/
  - directives/
  - factories/
  - modules/
  - routes/
  - templates/
  - vendors/
- stylesheets/

Here is my boot file, aka my main.js.


require.config({
  baseUrl: '/javascripts',
  paths: {
    'jQuery': '//',
    'angular': '//',
    'angular-resource': '//',
    'angular-route': '//'
  },
  shim: {
    'angular': {'exports': 'angular'},
    'angular-resource': {deps: ['angular']},
    'angular-route': {deps: ['angular']},
    'jQuery': {'exports': 'jQuery'}
  }
});

require(['jQuery', 'angular', 'routes/mainRoutes'], function ($, angular, mainRoutes) {
  $(function () { // using jQuery because it will run this even if DOM load already happened
    angular.bootstrap(document, ['mainApp']);
  });
});

You’ll notice I am not loading my mainApp in directly. Basically, we bring in the last thing that needs to be configured for the app to load, which prevents circular dependencies. Since the routes need the mainApp controllers, and the controllers need the mainApp module, we just have them include mainApp.js directly.

Also we are configuring require.js to bring in angular and angular-resource (angular-resource so we can do model factories).

Here is my super simple mainApp.js

UPDATE – If you plan on using routing, your app needs the ngRoute module loaded.

Thanks to Ryan for pointing this out!


define(['angular', 'angular-resource', 'angular-route'], function (angular) {
  return angular.module('mainApp', ['ngResource', 'ngRoute']);
});

And here is my mainRoutes file:


define(['modules/mainApp', 'controllers/listCtrl'], function (mainApp) {
  return mainApp.config(['$routeProvider', function ($routeProvider) {
    $routeProvider.when('/', {controller: 'listCtrl', templateUrl: '/templates/List.html'});
  }]);
});


You will notice I require the listCtrl but never actually use its reference. Including it adds it to my mainApp module so it can be used.

Here is my super simple controller:


define(['modules/mainApp', 'factories/Item'], function (mainApp) {
  mainApp.controller('listCtrl', ['$scope', 'Item', function ($scope, Item) {
    $scope.items = Item.query();
  }]);
});

So you’ll notice I have to include that mainApp again, so I can add the controller to it. I also have a dependency on Item, which in this case is a factory. I include it so that it gets added to the app and the dependency injection works. Again, I don’t actually reference it; I just let dependency injection do its thing.

Let’s take a look at this factory really quick.


define(['modules/mainApp'], function (mainApp) {
  mainApp.factory('Item', ['$resource', function ($resource) {
    return $resource('/item/:id', {id: '@id'});
  }]);
});

Pretty simple, but again, we have to pull in that mainApp module to add the factory to it.

So finally, let’s look at our index.html. Most of it is simple stuff, but the key part is the ng-view portion, which tells Angular where to place the view. Even if you don’t use document in your bootstrap and opt for a specific element, you still need this ng-view.


<!DOCTYPE html>
<html>
  <head>
    <title>Angular and Require</title>
    <script src="/javascripts/require.js" data-main="javascripts/main"></script>
  </head>
  <body>
    <div class="page-content" ng-view></div>
  </body>
</html>


TDD JavaScript With Require.js and Teabag on Rails


Test Driven Development, or TDD, is the process of writing tests before writing your code. Or minimally, testing the code that you write.

Since starting to use require.js, I found it annoying that there were not many test runners in Rails that would support require. I did find that Teabag had partial require support by deferring the execution of tests until certain assets are loaded.

However, once we integrated Teabag, we found that it didn’t always wait correctly. We also had to hardcode, or use our templating language to output, JavaScript that included all our tests, even though Teabag already knew what all our tests were. So like any other developer, we modified Teabag, added a use_require option to a test suite, then sent them a pull request on GitHub.

Our modifications basically use require itself to include the found tests (see: suite.matcher) and pull them in; once they are loaded, Teabag.execute runs. This guarantees that all the dependencies are loaded, because it uses require itself.
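The core of the change can be sketched roughly like this (names are simplified; this is the idea behind the patch, not the actual Teabag source):

```javascript
// Rough sketch of the use_require approach: hand the spec files that the
// suite matcher found to require, and only call Teabag.execute once every
// spec module (and everything it depends on) has loaded.
function runSpecsWithRequire(require, specFiles, execute) {
  require(specFiles, function () {
    execute(); // all AMD dependencies are resolved by the time this runs
  });
}
```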

Setting up Teabag

First we need to add the gem to your Gemfile.


# you may have to add :github => 'modeset/teabag' to get the latest
gem 'teabag'

Now run their install generator. In this case I will use mocha.

$ rails generate teabag:install --framework=mocha

Then setup your test suite.


# This isn't the only thing in your teabag.rb,
# just how to activate using require in the suite portion
config.suite do |suite|
  # Activate require
  suite.use_require = true

  # File matcher
  suite.matcher = "{spec/javascripts,app/assets}/**/*_spec.{js,js.coffee,coffee}"
  # there is a lot more, including your spec helper
end

Now we will include chai in our spec helper. You can use any assertion library you want; I just like chai. We are also bringing in application.js, though all it does for me is include vendor assets, so you might not need it.


//= require support/chai
//= require application

Writing a test

Now let’s write a simple test. First we will add a model to app/assets/javascripts/Model.js


define([], function () {
  var Model = function (firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  };
  Model.prototype.getFullName = function () {
    return this.firstName + ' ' + this.lastName;
  };
  return Model;
});

Now you can see the model takes two params, a first name and a last name, and has a single function, getFullName, which concats the two.

Now we will write our spec in spec/javascripts/Model_spec.js. The reason we use the _spec.js suffix is so that Teabag can find it via the suite’s matcher setting.


define(['Model'], function (Model) {
  describe('Model', function () {
    it('#getFullName should provide the full name', function () {
      var m = new Model('Inline', 'Block');
      chai.expect(m.getFullName()).to.equal('Inline Block');
    });
  });
});

Now we will start up our Rails server, hit /teabag in our browser, and it should run and execute our test.

Using GitHub for DotFiles

Most developers have a very specific setup they use while writing software, one that they are most comfortable with. When moving from one computer to another, some developers can take hours to get their environment set up.

For me, I really need vim and a bunch of plugins to really feel effective.

Something many developers do is store their setup files and configuration options on GitHub. We will call them “dot files”, since most *nix-based systems prefix their configuration files with a ‘.’ dot.

Using GitHub for this is a great idea, since most developers’ configuration is constantly evolving and we want to try out new things. With git, you can go back if you find that some new setup you added isn’t working out.

What I suggest is to make a repo on GitHub, clone it down to your computer, and drop your files in there sans the dot. So your .bashrc ends up as just bashrc, and that way it doesn’t show up hidden by default. Then just symlink that file into your home directory.

$ ln -s $(pwd)/bashrc ~/.bashrc

Every time you make changes to your bashrc, vimrc, inputrc or anything else you might use, just commit your changes and push them back up to GitHub.

You can also get more advanced and write a setup script. Mine does all the symlinking for me, then installs Vundle and tells it to pull down all the bundles I use.
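A minimal version of such a setup script might look like this (the file names here are just examples; adjust the list to whatever is in your repo, and plugin installation would follow the same pattern):

```shell
#!/bin/bash
# Symlink each tracked config file into $HOME with its leading dot restored.
cd "$(dirname "$0")"
for f in bashrc vimrc inputrc; do
  ln -sf "$PWD/$f" "$HOME/.$f"
done
```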

Hopefully this will help you streamline the process of getting started when you get a new computer or new VM or even server.

You can checkout my dot files on GitHub.

The Perfect Parser

So I’ve been silently proud (until now) about a few number/money parsers, URL parsers and such that I’ve written.

One reason I am so proud of them is that they are NOT perfect. However, they are short and sweet and cover, my guess is, about 99.5% of all the cases of data they are given, producing the proper results. If someone is trying to be malicious, they can be, but that won’t get them too far, since I encode everything I can remember to, into the DB and out to HTML. Basically, 99% of my DB queries aren’t written by hand and are created via my ORM (SimpleModel), which keeps me from getting exhausted writing queries, which typically leads to flaws.
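For flavor, a “good enough” money parser in that spirit might look like this (a sketch of the idea, not my actual code):

```javascript
// A deliberately imperfect money parser: strip everything but digits, the
// sign and the decimal point, then give up (return null) on anything that
// still isn't a number. Rounds to cents.
function parseMoney(input) {
  var cleaned = String(input).replace(/[^0-9.\-]/g, '');
  var value = parseFloat(cleaned);
  return isNaN(value) ? null : Math.round(value * 100) / 100;
}
```

It happily mis-parses garbage like '1.2.3' as 1.2, which is exactly the kind of imperfection the 99.5% tradeoff accepts.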

There we go. I’m happy to have taken only a few minutes to parse money and URLs.

Zappos Redirect Exploit

So I saw this interesting link bait on my Facebook feed, but the source of it happened to be a zappos.com URL. I thought it could have been from a Zappos blog, since those guys happen to be pretty hip and could have shared it. However, when I clicked the link, it redirected me to a seemingly malicious site meant to spread more of this malicious page.

Now I decided to see if I could replicate this, just to make sure I wasn’t crazy.

Turns out it was easy: you only need to replace the tgt parameter with any encoded URL and it will redirect you.
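To illustrate the shape of it (the endpoint and domains here are made up; any tgt-style open redirect works the same way):

```javascript
// An open redirect of this shape is abused by taking the trusted base URL
// and swapping in your own percent-encoded destination.
function buildRedirect(base, target) {
  return base + '?tgt=' + encodeURIComponent(target);
}

// buildRedirect('http://trusted.example/redirect', 'http://evil.example/')
// → 'http://trusted.example/redirect?tgt=http%3A%2F%2Fevil.example%2F'
```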

Now, the value of this exploit is about equal to the value of the domain. If people trust that URL, you can easily convince them to click it. Sharing it on Facebook, Twitter or any other social network would be easy and would make it look legit. Possible uses range from phishing for info to feeding the browser an exploit and possibly compromising the machine.

EDIT: After getting an email from Joe Levy, he told me to do a Google search (inurl:“target=http”, and variants thereof) to find URLs that could be potentially exploitable, and within a minute I found others, Autodesk included, with easy-to-exploit redirect scripts.