My dear ……… I love you so much

For you, my dear valentine S,

On this special day among special days, ‘Valentine’s Day’.

Ever since the day you came into my sight, everything else has been invisible to me. As we have passed through life to this stage, I feel your essence more in every moment. You are on the retina of my eye whether I open or close my eyelids. The red blood cells of my blood take your form and circulate all over my body, setting your image inside my heart and making me feel that you are my intimate.

You are the beam of moonlight and a fairy amongst the roses. I see you not only in my dreams but feel you as well. I am astonished to think that your image is truly mingled in my heart, and I think of you as my bolster. Then my heart fills with happiness, for you are my heartbeat.

There is some magic in the way you talk and deal with me. In your light I learn how to love; in your beauty I learn how to make poems of love. You dance inside my heart, where no one sees you.

I see you as a representation of pure aesthetic beauty, and in looking at your loveliness, I feel an outburst of profound joy. Your feminine force has shaped me, delighted me and seduced me into love. It is simplicity, desire, trust, understanding, sacrifice, romance and a relentless stream of love.

Loving is constantly evolving; it is a concatenation of endless dreams and desires. You love me in a most graceful, mysterious yet profound way. So it is a romance filled with vivid imagination, delivered in this dynamic life, making every day a special day by means of love, love and love, and trust, trust and trust.

A strange passion is moving in my mind, and my heart has truly become a lovebird today so that I can be with you on this special day. Is it really so that you, whom I love, are everywhere today?

Lastly, the heart we share, the times we have spent together or apart, the love we have, and the trust and understanding we have built make me feel lucky to be part of your life. I feel at my best knowing that I will have you as my soulmate.

Out beyond the ideas of wrongdoing and rightdoing there is a field; you know I will meet you there on this special day, virtually.

Your Loving

Desire

 


An Inside Look at the Architecture of NodeJS

ABSTRACT
This paper will identify and describe the various architectural features of NodeJS, a popular software platform used for scalable server-side networking applications on the web.
General Terms: Web Development
Keywords: NodeJS, Server-side networking, Web development, Event-driven, Asynchronous, Single-threaded
1. INTRODUCTION
Initially released in 2009, NodeJS set out to revolutionize web applications. Its creator, Ryan Dahl, sought to give web developers the opportunity to create highly interactive websites with push capabilities in order to maximize throughput and efficiency. Today, dozens of companies including LinkedIn, The New York Times, PayPal, and eBay utilize Node’s evented I/O model to power their large network programs on the web. Just a few years ago, the web was a stateless environment based on the request-response paradigm. Most interactive features were encapsulated within Flash or Java Applets, as isolated units within a web environment. Node allows web applications to establish real-time, two-way connections. Its major advantage lies in using asynchronous, event-driven I/O, thus remaining lightweight and efficient when managing a data-intensive application distributed across multiple devices.
Node is currently sponsored by Joyent, a software company specializing in high-performance cloud computing. Since its initial release as a Linux-only software platform, Node has acquired compatibility with Mac OS X, Windows, Solaris, FreeBSD and OpenBSD operating systems. Contributions to the code base are made regularly by just over a dozen developers via GitHub. With nearly one commit per day, Node’s fast-paced development has led to over 230 releases in just over 4 years. Nonetheless, it is imperative to note that Node has yet to release a version 1.0 [4]. The project is currently open-source under the MIT license.
With larger clients looking to integrate NodeJS into their mobile platforms, this relatively new technology is increasingly influenced by enterprise applications, in contrast to its initial popularity among independent developers. This trend is certain to persist as Node gains popularity, but Node is unlikely to replace solutions provided by Java and .NET, which enjoy much more significant worldwide investment.
This text will give an overview of the architecture of NodeJS, with a focus on a handful of key ideas that have led to its widespread adoption.
2. TECHNOLOGY
NodeJS is divided into two main components: the core and its modules. The core is built in C and C++. It combines Google’s V8 JavaScript engine with Node’s Libuv library and protocol bindings, including sockets and HTTP.
2.1 V8 Runtime Environment
Google’s V8 engine is an open-source Just-In-Time (JIT) compiler written in C++. In recent benchmarks, V8’s performance has surpassed that of other JavaScript interpreters, including SpiderMonkey and Nitro. It has additionally surpassed PHP, Ruby and Python performance. Given Google’s approach, some predict that it could eventually become as fast as C. The engine compiles JavaScript directly into assembly code ready for execution, avoiding intermediary representations such as tokens and opcodes that must be further interpreted. The runtime environment is itself divided into three major components: a compiler, an optimizer and a garbage collector.
2.1.1 Compiler
The compiler dissects the JavaScript code provided, extracting relevant commands. A built-in profiler identifies portions requiring optimization and sends these to the optimizing module.
2.1.2 Optimizer
The optimizer, known as Crankshaft, constructs an Abstract Syntax Tree, or AST, from the targeted code. This is then translated to a Static Single Assignment, or SSA, representation and optimized.
2.1.3 Garbage Collector
V8 divides memory into two categories: the new space and the old space. Both are located in the heap and used to keep track of JavaScript objects as referenced by pointers.
Any new object is added to the new space. When the new space reaches a size threshold, the garbage collector removes any “dead” (unreachable) objects from the new space and moves the surviving objects into the old space. Although the garbage collector disallows manual memory management and may slow the web application, it is a necessary component in order to maintain a lightweight JavaScript code base.
2.2 Libuv
The C++ Libuv library is responsible for Node’s asynchronous I/O operations and main event loop. It is composed of a fixed-size thread pool from which a thread is allocated for each I/O operation. By delegating these time-consuming operations to the Libuv module, the V8 engine and the remainder of NodeJS are free to continue executing other requests. Before 2012, Node relied on two separate libraries, Libio and Libev, in order to provide asynchronous I/O and support the main event loop. However, Libev was only supported on Unix. In order to add Windows support, the Libio library was fashioned as an abstraction around Libev. As developers continued to make modifications to the Libev library and its Libio counterpart, it became clear that the performance increases sought would be more appropriately addressed by making an entirely new library. For instance, Libev’s inner loop was performing tasks unnecessary to the Node project; by removing it, the developers were able to increase performance by nearly 40%. Libev and Libio were completely removed in version 0.9.0 with the introduction of Libuv.
2.3 Design Patterns
Node relies heavily on Object Pool, Facade, and Factory Design Patterns. Other, less prominent design patterns which appear throughout both Node and its V8 component include the Singleton and Visitor pattern. The majority of Node’s main thread is tightly coupled to the V8 engine through direct function calls. Most of the design patterns found within the V8 component extend their behavior into Node.
2.3.1 Object Pool
As resources are limited when performing I/O operations, NodeJS heavily relies on the Object Pool design pattern in order to maintain a centralized memory management system. Object pools are implemented as a list of objects available for a specific task. When one is required, it is requested from the pool manager. This design pattern is applied to Libuv’s thread pool.
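A minimal sketch of the pattern in JavaScript (the names are illustrative and do not reflect Node’s internal implementation):

```javascript
// Object-pool sketch: objects are handed out by a pool manager and
// returned for reuse instead of being re-allocated each time.
function Pool(factory, size) {
  this.factory = factory;
  this.free = [];
  for (var i = 0; i < size; i++) this.free.push(factory());
}

Pool.prototype.acquire = function () {
  // Hand out a pooled object, or create a fresh one if the pool is empty.
  return this.free.length > 0 ? this.free.pop() : this.factory();
};

Pool.prototype.release = function (obj) {
  // Return the object to the pool for later reuse.
  this.free.push(obj);
};

var pool = new Pool(function () { return { busy: false }; }, 2);
var worker = pool.acquire();
pool.release(worker);
var again = pool.acquire(); // the same object, reused
```

Libuv’s thread pool follows the same shape: a fixed set of threads is created up front and requests borrow one rather than spawning their own.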
2.3.2 Facade
A facade is an object that provides a simplified interface to a larger body of code, such as a class library. Within NodeJS, Libuv acts as a facade around the smaller Libev and Libio libraries, the former providing Node’s event loop and the latter asynchronous I/O. With this structure, the library provides support for both Windows and Linux systems.
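The idea can be sketched with two hypothetical stand-in “libraries” hidden behind a single facade object (none of these names correspond to Libuv’s real API):

```javascript
// Facade sketch: one object hides two lower-level components behind a
// single interface, the way Libuv wraps event-loop and async-I/O code.
var eventLoopLib = { // stand-in for a Libev-like component
  run: function () { return 'loop-running'; }
};
var asyncIoLib = { // stand-in for a Libio-like component
  read: function (name) { return 'read:' + name; }
};

// The facade is the only surface callers ever see.
var uvFacade = {
  run: function () { return eventLoopLib.run(); },
  read: function (name) { return asyncIoLib.read(name); }
};

var status = uvFacade.run();
var data = uvFacade.read('file.txt');
```

Callers depend only on the facade, so the backends can be swapped per platform without touching calling code.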
2.3.3 Singleton
A singleton class restricts the instantiation of a class to one object in order to better coordinate actions across a platform. NodeJS uses a singleton in its ArrayBufferAllocator
in order to keep track of allocations in a centralized location.
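As an illustration, a singleton allocator might be sketched like this in JavaScript (the names are hypothetical; V8’s actual ArrayBufferAllocator is a C++ class):

```javascript
// Singleton sketch: one shared allocator instance coordinates
// allocation bookkeeping from a single, central place.
var BufferAllocator = (function () {
  var instance = null;
  function create() {
    return {
      allocations: 0,
      allocate: function (n) {
        this.allocations += 1; // centralized tracking
        return new Array(n);
      }
    };
  }
  return {
    getInstance: function () {
      if (instance === null) instance = create();
      return instance; // every caller gets the same object
    }
  };
})();

var a = BufferAllocator.getInstance();
var b = BufferAllocator.getInstance();
a.allocate(8);
```

Because both handles refer to the one instance, counts recorded through either are visible through the other.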
2.3.4 Visitor
The visitor design pattern allows for the separation of an algorithm from an object structure on which it operates. The V8 engine is responsible for keeping track of various array buffers; these are interpreted by different views according to different formats, such as floats and unsigned integers. As a result, a visitor pattern was constructed surrounding the global array buffer.
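A toy sketch of the pattern (illustrative names only, not V8’s implementation): the buffer structure exposes an accept method, and each “view” supplies its own interpretation as a visitor.

```javascript
// Visitor sketch: traversal of the buffer structure is separated from
// the operations (views) applied to it.
function RawBuffer(values) { this.values = values; }
RawBuffer.prototype.accept = function (visitor) {
  return visitor.visitRawBuffer(this);
};

var floatView = {
  // Interpret the raw values as floats (here: simply halve them).
  visitRawBuffer: function (buf) {
    return buf.values.map(function (v) { return v / 2; });
  }
};
var uintView = {
  // Interpret the same raw values as unsigned integers.
  visitRawBuffer: function (buf) {
    return buf.values.map(function (v) { return v >>> 0; });
  }
};

var buf = new RawBuffer([2, 4]);
var floats = buf.accept(floatView);
var uints = buf.accept(uintView);
```

New interpretations can be added as new visitors without modifying the buffer class itself, which is the point of the pattern.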
2.4 NPM
The Node package manager, denoted npm, is the official package manager for NodeJS, written entirely in JavaScript. It comes bundled and installed automatically with the Node environment. It is responsible for managing dependencies for the Node application and allows users to install packages already available on the npm registry. Npm was initially developed in 2010 with the immediate goal of quick integration into the NodeJS platform; it has shipped with Node since version 0.6.3. Although it can function as a separate entity, npm is essential in preserving the small, easily maintainable nature of Node’s main core. By managing dependencies outside of Node, the platform can focus on its true objectives.
2.5 Influences
The NodeJS platform is heavily influenced by the architecture of the Unix operating system. The Unix OS is composed of a small kernel; it is complemented by layers of system calls, library routines and other modules. One of its main components consists of a concurrency-managing subsystem. This schema is reproduced in Node’s main thread, complemented by the Libuv component, all of which wrap around the V8 runtime environment. Node was further influenced by the Ruby Mongrel web server, an open-source platform available on all major operating systems developed in 2008. Mongrel offered both an HTTP library and a simple web server written entirely in Ruby; it was the first web server utilized by the social networking service Twitter.

A letter to our daughter By Mark Zuckerberg

Dear Max,
Your mother and I don’t yet have the words to describe the hope you give us for the future. Your new life is full of promise, and we hope you will be happy and healthy so you can explore it fully. You’ve already given us a reason to reflect on the world we hope you live in.
Like all parents, we want you to grow up in a world better than ours today.
While headlines often focus on what’s wrong, in many ways the world is getting better. Health is improving. Poverty is shrinking. Knowledge is growing. People are connecting. Technological progress in every field means your life should be dramatically better than ours today.
We will do our part to make this happen, not only because we love you, but also because we have a moral responsibility to all children in the next generation.
We believe all lives have equal value, and that includes the many more people who will live in future generations than live today. Our society has an obligation to invest now to improve the lives of all those coming into this world, not just those already here.
But right now, we don’t always collectively direct our resources at the biggest opportunities and problems your generation will face.
Consider disease. Today we spend about 50 times more as a society treating people who are sick than we invest in research so you won’t get sick in the first place.
Medicine has only been a real science for less than 100 years, and we’ve already seen complete cures for some diseases and good progress for others. As technology accelerates, we have a real shot at preventing, curing or managing all or most of the rest in the next 100 years.
Today, most people die from five things — heart disease, cancer, stroke, neurodegenerative and infectious diseases — and we can make faster progress on these and other problems.
Once we recognize that your generation and your children’s generation may not have to suffer from disease, we collectively have a responsibility to tilt our investments a bit more towards the future to make this reality. Your mother and I want to do our part.
Curing disease will take time. Over short periods of five or ten years, it may not seem like we’re making much of a difference. But over the long term, seeds planted now will grow, and one day, you or your children will see what we can only imagine: a world without suffering from disease.
There are so many opportunities just like this. If society focuses more of its energy on these great challenges, we will leave your generation a much better world.
• • •
Our hopes for your generation focus on two ideas: advancing human potential and promoting equality.
Advancing human potential is about pushing the boundaries on how great a human life can be.
Can you learn and experience 100 times more than we do today?
Can our generation cure disease so you live much longer and healthier lives?
Can we connect the world so you have access to every idea, person and opportunity?
Can we harness more clean energy so you can invent things we can’t conceive of today while protecting the environment?
Can we cultivate entrepreneurship so you can build any business and solve any challenge to grow peace and prosperity?
Promoting equality is about making sure everyone has access to these opportunities — regardless of the nation, families or circumstances they are born into.
Our society must do this not only for justice or charity, but for the greatness of human progress.
Today we are robbed of the potential so many have to offer. The only way to achieve our full potential is to channel the talents, ideas and contributions of every person in the world.
Can our generation eliminate poverty and hunger?
Can we provide everyone with basic healthcare?
Can we build inclusive and welcoming communities?
Can we nurture peaceful and understanding relationships between people of all nations?
Can we truly empower everyone — women, children, underrepresented minorities, immigrants and the unconnected?
If our generation makes the right investments, the answer to each of these questions can be yes — and hopefully within your lifetime.
• • •
This mission — advancing human potential and promoting equality — will require a new approach for all working towards these goals.
We must make long term investments over 25, 50 or even 100 years. The greatest challenges require very long time horizons and cannot be solved by short term thinking.
We must engage directly with the people we serve. We can’t empower people if we don’t understand the needs and desires of their communities.
We must build technology to make change. Many institutions invest money in these challenges, but most progress comes from productivity gains through innovation.
We must participate in policy and advocacy to shape debates. Many institutions are unwilling to do this, but progress must be supported by movements to be sustainable.
We must back the strongest and most independent leaders in each field. Partnering with experts is more effective for the mission than trying to lead efforts ourselves.
We must take risks today to learn lessons for tomorrow. We’re early in our learning and many things we try won’t work, but we’ll listen and learn and keep improving.
• • •
Our experience with personalized learning, internet access, and community education and health has shaped our philosophy.
Our generation grew up in classrooms where we all learned the same things at the same pace regardless of our interests or needs.
Your generation will set goals for what you want to become — like an engineer, health worker, writer or community leader. You’ll have technology that understands how you learn best and where you need to focus. You’ll advance quickly in subjects that interest you most, and get as much help as you need in your most challenging areas. You’ll explore topics that aren’t even offered in schools today. Your teachers will also have better tools and data to help you achieve your goals.
Even better, students around the world will be able to use personalized learning tools over the internet, even if they don’t live near good schools. Of course it will take more than technology to give everyone a fair start in life, but personalized learning can be one scalable way to give all children a better education and more equal opportunity.
We’re starting to build this technology now, and the results are already promising. Not only do students perform better on tests, but they gain the skills and confidence to learn anything they want. And this journey is just beginning. The technology and teaching will rapidly improve every year you’re in school.
Your mother and I have both taught students and we’ve seen what it takes to make this work. It will take working with the strongest leaders in education to help schools around the world adopt personalized learning. It will take engaging with communities, which is why we’re starting in our San Francisco Bay Area community. It will take building new technology and trying new ideas. And it will take making mistakes and learning many lessons before achieving these goals.
But once we understand the world we can create for your generation, we have a responsibility as a society to focus our investments on the future to make this reality.
Together, we can do this. And when we do, personalized learning will not only help students in good schools, it will help provide more equal opportunity to anyone with an internet connection.
• • •
Many of the greatest opportunities for your generation will come from giving everyone access to the internet.
People often think of the internet as just for entertainment or communication. But for the majority of people in the world, the internet can be a lifeline.
It provides education if you don’t live near a good school. It provides health information on how to avoid diseases or raise healthy children if you don’t live near a doctor. It provides financial services if you don’t live near a bank. It provides access to jobs and opportunities if you don’t live in a good economy.
The internet is so important that for every 10 people who gain internet access, about one person is lifted out of poverty and about one new job is created.
Yet still more than half of the world’s population — more than 4 billion people — don’t have access to the internet.
If our generation connects them, we can lift hundreds of millions of people out of poverty. We can also help hundreds of millions of children get an education and save millions of lives by helping people avoid disease.
This is another long term effort that can be advanced by technology and partnership. It will take inventing new technology to make the internet more affordable and bring access to unconnected areas. It will take partnering with governments, non-profits and companies. It will take engaging with communities to understand what they need. Good people will have different views on the best path forward, and we will try many efforts before we succeed.
But together we can succeed and create a more equal world.
• • •
Technology can’t solve problems by itself. Building a better world starts with building strong and healthy communities.
Children have the best opportunities when they can learn. And they learn best when they’re healthy.
Health starts early — with loving family, good nutrition and a safe, stable environment.
Children who face traumatic experiences early in life often develop less healthy minds and bodies. Studies show physical changes in brain development leading to lower cognitive ability.
Your mother is a doctor and educator, and she has seen this firsthand.
If you have an unhealthy childhood, it’s difficult to reach your full potential.
If you have to wonder whether you’ll have food or rent, or worry about abuse or crime, then it’s difficult to reach your full potential.
If you fear you’ll go to prison rather than college because of the color of your skin, or that your family will be deported because of your legal status, or that you may be a victim of violence because of your religion, sexual orientation or gender identity, then it’s difficult to reach your full potential.
We need institutions that understand these issues are all connected. That’s the philosophy of the new type of school your mother is building.
By partnering with schools, health centers, parent groups and local governments, and by ensuring all children are well fed and cared for starting young, we can start to treat these inequities as connected. Only then can we collectively start to give everyone an equal opportunity.
It will take many years to fully develop this model. But it’s another example of how advancing human potential and promoting equality are tightly linked. If we want either, we must first build inclusive and healthy communities.
• • •
For your generation to live in a better world, there is so much more our generation can do.
Today your mother and I are committing to spend our lives doing our small part to help solve these challenges. I will continue to serve as Facebook’s CEO for many, many years to come, but these issues are too important to wait until you or we are older to begin this work. By starting at a young age, we hope to see compounding benefits throughout our lives.
As you begin the next generation of the Chan Zuckerberg family, we also begin the Chan Zuckerberg Initiative to join people across the world to advance human potential and promote equality for all children in the next generation. Our initial areas of focus will be personalized learning, curing disease, connecting people and building strong communities.
We will give 99% of our Facebook shares — currently about $45 billion — during our lives to advance this mission. We know this is a small contribution compared to all the resources and talents of those already working on these issues. But we want to do what we can, working alongside many others.
We’ll share more details in the coming months once we settle into our new family rhythm and return from our maternity and paternity leaves. We understand you’ll have many questions about why and how we’re doing this.
As we become parents and enter this next chapter of our lives, we want to share our deep appreciation for everyone who makes this possible.
We can do this work only because we have a strong global community behind us. Building Facebook has created resources to improve the world for the next generation. Every member of the Facebook community is playing a part in this work.
We can make progress towards these opportunities only by standing on the shoulders of experts — our mentors, partners and many incredible people whose contributions built these fields.
And we can only focus on serving this community and this mission because we are surrounded by loving family, supportive friends and amazing colleagues. We hope you will have such deep and inspiring relationships in your life too.
Max, we love you and feel a great responsibility to leave the world a better place for you and all children. We wish you a life filled with the same love, hope and joy you give us. We can’t wait to see what you bring to this world.
Love,
Mom and Dad

Create a Sails.js (node.js) App with basic CRUD features using MongoDb

Hi guys, here I am trying to share some working sample code for performing basic CRUD operations in Sails.js (Node.js).

Step 1: Create a new Sails.js app
<code>
$ sails new sailsApp

$ cd sailsApp

$ npm install sails-mongo  // since I have used MongoDB as the database
</code>
Step 2: Connect your app with MongoDB
Update the file /config/connections.js:

<code>
someMongodbServer: {
  adapter: 'sails-mongo',
  host: 'localhost',
  port: 27017,
  user: 'username',
  password: 'password',
  database: 'mydb' // database name
},
</code>

Update the file /config/env/development.js to:

<code>
module.exports = {

  models: {
    connection: 'someMongodbServer'
  }

};
</code>

Step 3: Create a new data model and its controller
<code>
$ sails generate model user
$ sails generate controller user
</code>

CREATE:
Step 4: Add a new action “create” inside api/controllers/UserController.js

<code>
module.exports = {

  create: function (req, res) {

    if (req.method === "POST" && req.param("User", null) != null) {

      User.create(req.param("User")).done(function (err, model) {

        // Error handling
        if (err) {
          res.send("Error: Sorry! Something went wrong.");
        } else {
          res.send("Successfully created!");
          // res.redirect('user/view/' + model.id);
        }

      });

    } else {
      res.render('user/create');
    }

  }

};
</code>

Step 5: Add a create form under views/user/create.ejs
<code>
<a href="/user/index">List</a>
<h2>User Create form</h2>
<form action="/user/create" method="POST">
<table>
<tr><td>FirstName</td><td><input type="text" name="User[fName]"></td></tr>
<tr><td>LastName</td><td><input type="text" name="User[lName]"></td></tr>
<tr><td>DOB</td><td><input type="text" name="User[dob]"></td></tr>
<tr><td>UserName</td><td><input type="text" name="User[userName]"></td></tr>
<tr><td>Password</td><td><input type="password" name="User[password]"></td></tr>
<tr><td>Email</td><td><input type="text" name="User[email]"></td></tr>
<tr><td></td><td><input type="submit" value="ADD"></td></tr>
</table>
</form>
</code>

READ:
Step 6: Add an “index” action in UserController.js which can be used to list all created documents.

<code>
index: function (req, res) {

  User.find().exec(function (err, users) {

    res.render('user/index', { users: users });

  });

}
</code>

Step 7: Add an index.ejs view file for the “index” action
/views/user/index.ejs
<code>
<a href="/user/create">+Create</a>
<ol>
<% users.forEach(function (model) { %>
<li><%= model.fName %> (<a href="/user/destroy/<%= model.id %>">delete</a> | <a href="/user/update/<%= model.id %>">update</a> | <a href="/user/view/<%= model.id %>">view</a>)</li>
<% }); %>
</ol>
</code>
Step 8: Add a “view” action under UserController.js
<code>
view: function (req, res) {

  var id = req.param("id", null);

  User.findOne(id).done(function (err, model) {

    res.render('user/view', { model: model });

  });

}
</code>
Step 9: Add a view file for the “view” action under /views/user/view.ejs
<code>
<a href="/user/create">+Create</a> | <a href="/user/index">List</a> | <a href="/user/update/<%= model.id %>">Update</a>

<h2>View <%= model.fName %></h2>
<ul>
<li>FirstName: <%= model.fName %></li>
<li>LastName: <%= model.lName %></li>
<li>DOB: <%= model.dob %></li>
<li>UserName: <%= model.userName %></li>
<li>Password: <%= model.password %></li>
<li>Email: <%= model.email %></li>
</ul>
</code>

UPDATE:

Step 10: Add a new action “update” inside api/controllers/UserController.js

<code>
update: function (req, res) {

  var id = req.param("id", null);

  User.findOne(id).done(function (err, model) {

    if (req.method === "POST" && req.param("User", null) != null) {

      var usr = req.param("User", null);

      model.fName = usr.fName;
      model.lName = usr.lName;
      model.dob = usr.dob;
      model.userName = usr.userName;
      model.password = usr.password;
      model.email = usr.email;

      model.save(function (err) {

        if (err) {
          res.send("Error");
        } else {
          res.redirect('user/view/' + model.id);
        }

      });

    } else {
      res.render('user/update', { model: model });
    }

  });

}
</code>

Step 11: Add an update form under views/user/update.ejs
<code>
<a href="/user/index">List</a>
<h2>User <%= model.fName %> Update form</h2>
<form action="/user/update/<%= model.id %>" method="POST">
<table>
<tr><td>FirstName</td><td><input type="text" name="User[fName]" value="<%= model.fName %>"></td></tr>
<tr><td>LastName</td><td><input type="text" name="User[lName]" value="<%= model.lName %>"></td></tr>
<tr><td>DOB</td><td><input type="text" name="User[dob]" value="<%= model.dob %>"></td></tr>
<tr><td>UserName</td><td><input type="text" name="User[userName]" value="<%= model.userName %>"></td></tr>
<tr><td>Password</td><td><input type="password" name="User[password]" value="<%= model.password %>"></td></tr>
<tr><td>Email</td><td><input type="text" name="User[email]" value="<%= model.email %>"></td></tr>
<tr><td></td><td><input type="submit" value="SAVE"></td></tr>
</table>
</form>
</code>
DELETE:

Step 12: Add a delete/destroy action under api/controllers/UserController.js

<code>
destroy: function (req, res) {

  var id = req.param("id", null);

  User.findOne(id).done(function (err, user) {

    user.destroy(function (err) {

      // record has been removed
      res.redirect('user/index/');

    });

  });

}
</code>
You are done. Enjoy coding with Sails.js!
The code I’ve used in the above examples is working as described.

Facebook Authentication With Sails.js and Passport

Setting up multiple types of authentication on an application can be a daunting task. There are a ton of moving pieces and you want to make sure that your user model is up to the task of consuming any type of authentication you throw at it. To that end, I recently set up Facebook authentication on a Sails.js app I’m working on and I thought it might be worth sharing how easy it actually turned out to be.

Passport To The Rescue!

The good news is that if you are creating a Sails app, you can use Passport, which makes using multiple types of validation a ton easier.

Passport authenticates requests through plugins known as “strategies”. Almost any of the strategies you can imagine, such as Facebook, Twitter, Github, Username/Password, etc… are already written. It’s also really easy to write your own strategy if needed. Each strategy just takes a little bit of configuration and then gives you a callback when the login succeeds or fails.

The procedure is as follows:

Install Passport

You will need to install the passport npm module:

npm install passport

As well as the passport-facebook module:

npm install passport-facebook

We’ll come back to this, but we need to create a model to hold the data for the user first.

Create a User Model

To do authentication, we will need a user model to check our login against. You can generate your model from the command line using:

sails generate user

That will spit out a model in /api/models/User.js as well as a controller in /api/controllers/UserController.js.

The User model should look something like this:

/**
 * User
 *
 * @module      :: Model
 * @description :: A short summary of how this model works and what it represents.
 * @docs        :: http://sailsjs.org/#!documentation/models
 */

module.exports = {

  attributes: {

    /* e.g.
    nickname: 'string'
    */

  }

};

Now, we just need to update the attributes that are being exported to include the facebookId:

module.exports = {

  attributes: {

    facebookId: {
      type: 'string',
      required: true,
      unique: true
    }

  }

};

Update the User Controller

You will need to add a little logic to the user controller to handle authentication with Facebook. As I mentioned earlier, the user controller should be located at /api/controllers/UserController.js. Some of the routes and request scope may need to be changed based on your needs, but this is essentially how I got it working:

var passport = require('passport');

module.exports = {

  login: function (req, res) {
    res.view();
  },

  dashboard: function (req, res) {
    res.view();
  },

  logout: function (req, res){
    req.session.user = null;
    req.session.flash = 'You have logged out';
    res.redirect('user/login');
  },

  'facebook': function (req, res, next) {
    passport.authenticate('facebook', { scope: ['email', 'user_about_me'] },
      function (err, user) {
        if (err || !user) {
          req.session.flash = 'There was an error';
          return res.redirect('user/login');
        }
        req.logIn(user, function (err) {
          if (err) {
            req.session.flash = 'There was an error';
            return res.redirect('user/login');
          }
          req.session.user = user;
          res.redirect('/user/dashboard');
        });
      })(req, res, next);
  },

  'facebook/callback': function (req, res, next) {
    passport.authenticate('facebook', function (err, user) {
      if (err || !user) {
        return res.redirect('user/login');
      }
      req.session.user = user;
      res.redirect('/user/dashboard');
    })(req, res, next);
  }

};

Create the Passport Service

This is probably the biggest addition you will need to make to get your app accepting Facebook logins. This should all go in api/services/passport.js.

Remember to replace the clientID and clientSecret values with your own Facebook app credentials. If you don’t have them yet, you can create them at developers.facebook.com.

var passport = require('passport'),
  FacebookStrategy = require('passport-facebook').Strategy;

function findById(id, fn) {
  User.findOne(id).done(function (err, user) {
    if (err) {
      return fn(err);
    }
    return fn(null, user);
  });
}

function findByFacebookId(id, fn) {
  User.findOne({
    facebookId: id
  }).done(function (err, user) {
    if (err) {
      return fn(err);
    }
    return fn(null, user);
  });
}

passport.serializeUser(function (user, done) {
  done(null, user.id);
});

passport.deserializeUser(function (id, done) {
  findById(id, function (err, user) {
    done(err, user);
  });
});

passport.use(new FacebookStrategy({
    clientID: "YOUR-FACEBOOK-CLIENT-ID",
    clientSecret: "YOUR-FACEBOOK-CLIENT-SECRET",
    callbackURL: "http://localhost:1337/user/facebook/callback",
    enableProof: false
  }, function (accessToken, refreshToken, profile, done) {

    findByFacebookId(profile.id, function (err, user) {

      // Create a new User if it doesn't exist yet
      if (!user) {
        User.create({

          facebookId: profile.id

          // You can also add any other data you are getting back from Facebook here 
          // as long as it is in your model

        }).done(function (err, user) {
          if (user) {
            return done(null, user, {
              message: 'Logged In Successfully'
            });
          } else {
            return done(err, null, {
              message: 'There was an error logging you in with Facebook'
            });
          }
        });

      // If there is already a user, return it
      } else {
        return done(null, user, {
          message: 'Logged In Successfully'
        });
      }
    });
  }
));

Add the Express Middleware

You will need to add a little bit of middleware in order to use Passport with Express. I just created a file called /config/express.js with this logic:

var passport = require('passport');

module.exports.express = {
    customMiddleware: function(app){

        // Passport
        app.use(passport.initialize());
        app.use(passport.session());

        app.use(function(req, res, next){
            res.locals.user = req.session.user;
            next();
        });

    }
};

All it’s really doing is initializing passport, using the passport session and adding the user to the local scope of any views that are being rendered.

Create a Login View

Create a view in views/user/login.ejs. The bare bones of the login view should look like this:

<a href="/user/facebook">Login With Facebook</a>

Let’s also throw up a dashboard view in views/user/dashboard.ejs:

<p>You are logged in! Your id is <strong><%- user.id %></strong> and your Facebook Id is <strong><%- user.facebookId %></strong></p>
<p><a href="/user/logout">Logout</a></p>

Okay then. Now, you can test this whole thing by running sails lift in your command line and going to http://localhost:1337/user/login. When you click on the “Login With Facebook” link, it should authenticate and then take you to the dashboard page. If you log out and try to go to the dashboard page again without logging back in, it will actually throw an error because there is no user in the session. Not the best experience, but it shows that the user has successfully logged out, and it’s fairly trivial to check for the user in the session and redirect if one doesn’t exist.
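That session check could be factored into a Sails policy. Here’s a minimal sketch, assuming a hypothetical file at /api/policies/isAuthenticated.js (the name and path are illustrative, not from the original setup):

```javascript
// Hypothetical Sails policy: lets the request through when a user is in
// the session, otherwise redirects back to the login page.
function isAuthenticated(req, res, next) {
  if (req.session && req.session.user) {
    return next(); // user is logged in, continue to the action
  }
  res.redirect('/user/login'); // no user in the session
}

module.exports = isAuthenticated;
```

You would then map this policy to the dashboard action in /config/policies.js so anonymous visitors never hit the error at all.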

So there you have it. Facebook authentication with Sails.js. Enjoy!

Introduction to Sails.js

Sails is a JavaScript framework designed to resemble the MVC architecture of frameworks like Ruby on Rails. It makes the process of building Node.js apps easier, especially APIs, single page apps and realtime features, like chat.

Installation

Installing Sails is quite simple. The prerequisites are to have Node.js installed, along with npm, which comes with Node. Then you must issue the following command in the terminal:

sudo npm install sails -g

Create a New Project

In order to create a new Sails project, the following command is used:

sails new myNewProject

Sails will generate a new folder named myNewProject and add all the necessary files to have a basic application built. To see what was generated, just get into the myNewProject folder and run the Sails server by issuing the following command in the terminal:

sails lift

Sails’s default port is 1337, so if you visit http://localhost:1337 you should get the Sails default index.html page.

index.html

Now, let’s have a look at what Sails generated for us. In our myNewProject folder the following files and sub-folders were created:

Folder structure

The assets Folder

The assets folder contains subdirectories for the JavaScript and CSS files that should be loaded during runtime. This is the best place to store auxiliary libraries used by your application.

The public Folder

Contains the files that are publicly available, such as pictures your site uses, the favicon, etc.

The config Folder

This is one of the important folders. Sails is designed to be flexible. It assumes some standard conventions, but it also allows the developer to change the way Sails configures the created app to fit the project’s needs. The following is a list of configuration files present in the config folder:

  • adapters.js – used to configure the database adapters
  • application.js – general settings for the application
  • assets.js – asset settings for CSS and JS
  • bootstrap.js – code that will be run before the app launches
  • locales – folder containing translations
  • policies.js – user rights management configuration
  • routes.js – the routes for the system
  • views.js – view related settings

The Sails.js documentation contains detailed information on each of these files.

The views Folder

The application’s views are stored in this folder. Looking at its contents, we notice that the views are generated by default as EJS (embedded JavaScript). The views folder also contains views for error handling (404 and 500), the layout file (layout.ejs), and the views for the home controller, which were generated by Sails.

The api Folder

This folder is composed of several sub-folders:

  • the adapters folder contains the adapters used by the application to
    handle database connections
  • the controllers folder contains the application controllers
  • the models folder stores the application’s models
  • the policies folder stores the rules for application user access
  • the services folder stores the api services implemented by the app

Configure the Application

So far we have created our application and taken a look at what was generated by default. Now it’s time to configure the application to make it fit our needs.

General Settings

General settings are stored in the config/application.js file. The configurable options for the application are:

  • application name (appName)
  • the port on which the app will listen (port)
  • the application environment; can be either development or production (environment)
  • the level for the logger, usable to control the size of the log file (log)

Note that setting the app environment to production makes Sails bundle and minify the CSS and JS, which can make it harder to debug.

Routes

Application routes are defined in the config/routes.js file. As you’d expect, this file will be the one that you will most often work with as you add new controllers to the application.

The routes are exported as follows, in the configuration file:

module.exports.routes = {
  // route to index page of the home controller
  '/': {
    controller: 'home'
  },

  // route to the auth controller, login action
  '/login': {
    controller: 'auth',
    action: 'login'
  },

  // route to blog controller, add action to add a post to a blog
  // note that we use also the HTTP method/verb before the path
  'post /blog/add': {
    controller: 'blog',
    action: 'add_post'
  },

  // route to get the first blog post. The find action will return
  // the database row containing the desired information
  '/blog/:item': {
    controller: 'blog',
    action: 'find'
  }
}

Views

Regarding views, the configurable options are the template engine to be used and whether or not a layout should be used for views.

Models

Models are a representation of the application data stored in a database. Models are defined by using attributes and associations. For instance, the definition of a Person model might look like this:

// Person.js
var Person = {
  name: 'STRING',
  age: 'INTEGER',
  birthDate: 'DATE',
  phoneNumber: 'STRING',
  emailAddress: 'STRING'
};
module.exports = Person;

The communication with the underlying database is done through adapters. Adapters are defined in api/adapters and are configured in the adapters.js file. At the moment of writing this article, Sails comes with three adapters: memory, disk and mysql but you can write your own adapter (see the documentation for details).

Once you have a model defined you can operate on it by creating records, finding records, updating and destroying records.
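The real model methods need a running Sails app, so here is a self-contained sketch of the calling pattern using a hypothetical in-memory stand-in for a Person model (the stand-in is purely illustrative, not Waterline itself):

```javascript
// In-memory stand-in (hypothetical) that mimics the callback pattern
// a Sails model exposes for creating and finding records.
var records = [];
var Person = {
  create: function (attrs, cb) {
    var person = Object.assign({ id: records.length + 1 }, attrs);
    records.push(person);
    cb(null, person);
  },
  find: function (criteria, cb) {
    cb(null, records.filter(function (r) {
      return r.name === criteria.name;
    }));
  }
  // update and destroy follow the same (criteria, callback) shape
};

// The calling pattern mirrors what you would write in a controller:
Person.create({ name: 'Ada', age: 36 }, function (err, person) {
  if (err) throw err;
  console.log('created person', person.id);
});

Person.find({ name: 'Ada' }, function (err, people) {
  if (err) throw err;
  console.log('found', people.length, 'record(s)');
});
```

The important part is the shape: every operation takes criteria plus a callback that receives an error first and the record(s) second.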

Controllers

Controllers are placed in api/controllers. A controller is created using the following command:

sails generate controller comment

This command will generate a CommentController object. Actions are defined inside this object. Actions can also be generated when you issue the generate controller command:

sails generate controller comment create destroy tag like

This will create a Comment controller with actions for create, destroy, tag and like.

Actions receive as parameters the request and the response objects, which can be used for getting parameters of the URI (the request object) or output in the view (using the response object).
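As a sketch of that signature, a hypothetical index action on the comment controller might read a URI parameter from the request object and answer through the response object (the action name and parameter are illustrative):

```javascript
// Hypothetical controller action: reads the `name` parameter from the
// request and uses the response object to send JSON back to the client.
var CommentController = {
  index: function (req, res) {
    var name = req.param('name') || 'anonymous';
    res.json({ message: 'Hello, ' + name });
  }
};

module.exports = CommentController;
```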

To communicate with the model, a callback is passed to the appropriate model method. For instance, when querying the database with find, the following pattern is used:

Blog.find(id).done(function (err, blog) {
  // blog is the database record with the specified id
  console.log(blog.content);
});

Views

Views are used to handle the UI of the application. By default, views are handled using EJS, but any other templating library can be used. How to configure views was discussed previously in the Configuration chapter.

Views are defined in the /views directory and the templates are defined in the /assets/templates folder.

There are mainly four types of views:

  • server-side views
  • view partials
  • layout views
  • client-side views

Server-Side Views

Their job is to display data when a view is requested by the client. Usually the res.view method responds to the client with the appropriate view. But even if no controller or action exists for a request, Sails will still serve the view found at /views/:controller/:action.ejs.

The Layout View

The Layout can be found in /views/layout.ejs. It is used to load the application assets such as stylesheets or JavaScript libraries.

Have a look at the specified file:

<!DOCTYPE html>
<html>
  <head>
    <title><%- title %></title>

    <!-- Viewport mobile tag for sensible mobile support -->
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">

    <!-- JavaScript and stylesheets from your public folder are included here -->
    <%- assets.css() %>
    <%- assets.js() %>
  </head>

  <body>
    <%- body %>

    <!-- Templates from your view path are included here -->
    <%- assets.templateLibrary() %>
  </body>
</html>

The assets.css() and assets.js() calls load the CSS and JS assets of our application, and assets.templateLibrary() loads the client templates.

Client-Side Templates

These are defined in the /assets/templates folder and are loaded as we saw above.

Routes

We discussed how to configure routes in the Configuration chapter.

There are several conventions that Sails follows when routes are handled:

  • if the URL is not specified in the config/routes.js the default route for a URL is /:controller/:action/:id with the obvious meanings for controller and action and id being the request parameter derived from the URL.
  • if :action is not specified, Sails will redirect to the appropriate action. Out of the box, the same RESTful route conventions are used as in Backbone.
  • if the requested controller/action do not exist, Sails will behave as so:
    • if a view exists, Sails will render that view
    • if a view does not exist, but a model exists, Sails will return the JSON form of that model
    • if none of the above exist, Sails will respond with a 404
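To make the default /:controller/:action/:id convention concrete, here is a small illustrative helper (not Sails’ actual router code) that derives those pieces from a URL:

```javascript
// Hypothetical illustration of the default /:controller/:action/:id
// convention -- this is not how Sails implements it internally.
function parseDefaultRoute(url) {
  var parts = url.split('/').filter(Boolean);
  return {
    controller: parts[0] || null,
    action: parts[1] || null,
    id: parts[2] || null
  };
}

console.log(parseDefaultRoute('/blog/find/3')); // controller: blog, action: find, id: 3
console.log(parseDefaultRoute('/blog'));        // action and id fall back to null
```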

Conclusion

Now I’ve barely scratched the surface with what Sails can do, but stay tuned, as I will follow this up with an in-depth presentation showing you how to build an application, using Sails.

Also keep in mind that Sails is currently under development and constantly changing. So make sure to check out the documentation to see what’s new.

sailsCasts Answers: Ep7: How Do I Create a Restful Json CRUD Api in Sails From Scratch?

Sep 11, 2014

The repo for this project can be found here: https://github.com/irlnathan/sails-crud-api

Transcript

Howdy and welcome back to part II of our three part series. In the last episode we learned how the http request/response protocol works with routes, controllers, actions and models to deliver a restful json CRUD api. In this episode we’ll take the concepts we learned and use them to build the api from scratch. In the final episode we’ll explore how Sails’ blueprint actions and routes can be used to create that same restful json CRUD api automatically for any of your controllers and models.

Let’s review what we’re trying to accomplish.

Our api will be used to access and update information that tracks our sleep patterns including how much we sleep each night and the quality of that sleep.

So we want the api to be able to respond to requests to find, create, update or delete instances of our sleep model. We’ll create actions that correspond to the requests and then build up routes that match the appropriate http verbs and paths with the corresponding controller and action.

  • So the find request will use the http verb get with the path /sleep/:id? and bind to the sleep controller and find action.
  • The create request will use the verb post with the path /sleep and bind to the sleep controller and the create action.
  • The update request will use the verb put with the path /sleep/:id? and bind to the sleep controller and the update action.
  • and finally, the delete request will use the verb delete with the path /sleep/:id? and bind to the sleep controller and destroy action.

The actions will then use the model methods to find, create, update or destroy the model as requested and use the parameters hours_slept and sleep_quality to pass any necessary information within the request through the action to the model. The action will then respond with the request status as well as any model instance or instances required.

So let’s get started. I’m going to bring up a terminal window and we’re going to create a new sails project called mySleep using sails new mySleep --linker. Then I’ll change into the mySleep folder and generate a sleep controller and model using sails generate sleep.

So, here’s a roadmap of what we’re going to build. I’m going to start with the create action, building the action and then building the route that will bind the find request with the sleep controller and find action. I’m going to go through each action, create it, and then build the matching route that will bind our request to the controller and action. So let’s start with the create action.

I’ll open my sleep controller found in /api/controllers/SleepController.js and create my first action called create:

// a CREATE action  
create: function(req, res, next) {

    var params = req.params.all();

    Sleep.create(params, function(err, sleep) {

        if (err) return next(err);

        res.status(201);

        res.json(sleep);

    });
}

The action is straightforward: we grab the request’s parameters into the var params and then pass params into the create method of our sleep model. If there’s an error we’ll return it; if not, I’ll send a 201 status code response with the newly created model instance formatted as json.

So that’s the create action, now I need to create a route that will bind this controller and action to our request. So let’s open the routes in /config/routes.js and I’ll add my route after the existing home route:

module.exports.routes = {

  '/': {
    view: 'home/index'
  },

  // Custom CRUD Rest Routes
  'post /sleep': 'SleepController.create'

};

The route consists of the verb post to the path /sleep which is bound to the sleep controller and the create action. So let’s make sure our create action is working. I’ll go into the terminal, start sails with sails lift. I’ll again be using the POSTMAN chrome extension to test our requests. We’ll be using the http verb POST to the path /sleep adding two parameters hours_slept and sleep_quality. When I click send, Sails returns my newly created record as json.

{
    "hours_slept": "8",
    "sleep_quality": "good",
    "createdAt": "2013-12-10T21:31:00.442Z",
    "updatedAt": "2013-12-10T21:31:00.442Z",
    "id": 1
}

So let’s take a look at our api roadmap. We’ve built the create action as the first of the four actions of our api. Next, we’ll build the find action and then we’ll build a route that will bind Sleep controller and find action to our request. For the action let’s go back into the SleepController.js file and look at the find action code:

// a FIND action
find: function (req, res, next) {

  var id = req.param('id');

  var idShortCut = isShortcut(id);

  if (idShortCut === true) {
    return next();
  }

  if (id) {

    Sleep.findOne(id, function (err, sleep) {

      if (err) return next(err);

      if (sleep === undefined) return res.notFound();

      res.json(sleep);

    });

  } else {

    var where = req.param('where');

    if (_.isString(where)) {
      where = JSON.parse(where);
    }

    // This allows you to put something like id=2 to work.
    // if (!where) {
    //
    //   // Build monolithic parameter object
    //   params = req.params.all();
    //
    //   params = _.omit(params, function (param, key) {
    //     return key === 'limit' || key === 'skip' || key === 'sort';
    //   });
    //
    //   where = params;
    //
    //   console.log("making it here!");
    // }

    var options = {
      limit: req.param('limit') || undefined,
      skip: req.param('skip') || undefined,
      sort: req.param('sort') || undefined,
      where: where || undefined
    };

    console.log("This is the options", options);

    Sleep.find(options, function (err, sleep) {

      if (err) return next(err);

      if (sleep === undefined) return res.notFound();

      res.json(sleep);

    });

  }

  function isShortcut(id) {
    if (id === 'find' || id === 'update' || id === 'create' || id === 'destroy') {
      return true;
    }
  }

},

Let’s also take a look at the route that will bind our request to the sleep controller and find action in /config/routes.js:

'get /sleep/:id?': 'SleepController.find'

The route points to our find action, but look at the end of the path. What’s up with :id? and the question mark? The question mark makes the id parameter optional. That way we capture both the request 'get /sleep' as well as 'get /sleep/:id'.

The find action will be our most complex action of the four in our api. This is because we have to provide for a request finding a single instance of the model, multiple instances of the model, as well as using criteria and options to narrow and/or limit the scope of the find request.

So within our find action, we’ll attempt to assign a parameter called id to the var id. The next line of code looks to see if the id is a shortcut. I’m going to skip over this part because shortcuts are part of Sails’ blueprints, which we’ll discuss in the third episode.

So if the id exists we’re going to assume that the request is looking for a particular model instance. We’ll pass the id to the findOne model method, and if we don’t get back an instance of sleep in the callback, we’ll respond with a 404 Not Found status code. On success, we’ll respond with the model instance formatted as json.

Checking for multiple model instances: if no id is provided we’ll start looking for other criteria or options that may have been passed as parameters for finding one or more model instances. Criteria is placed in a where clause, which is just the key name for a criteria object. For example, if you want to find all model instances where sleep_quality = good, your parameters would look like this: ?where={"sleep_quality": "good"}. We’ll also check for options that further limit the result in some way. For example, let’s say we only want the first 5 model instances of our result. The parameters would look like this: ?where={"sleep_quality": "good"}&limit=5.

So if where exists as a parameter and its value is a string, we’ll parse it as json and assign it to the var where. Even if where doesn’t exist we’ll still look for the keys limit, skip, and sort and place them within the options object. Finally, we’ll pass the options object to the find model method, and if we don’t get back an instance of sleep in the callback, we’ll respond with a 404 Not Found status code. On success, we’ll respond with the model instance(s) formatted as json.
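Condensed into a standalone helper, that parameter handling might look like this (a simplified sketch for illustration, not the exact controller code):

```javascript
// Simplified (hypothetical) version of the parameter handling in the
// find action: parse a JSON `where` string and collect the options.
function buildFindOptions(params) {
  var where = params.where;
  if (typeof where === 'string') {
    where = JSON.parse(where);
  }
  return {
    limit: params.limit || undefined,
    skip: params.skip || undefined,
    sort: params.sort || undefined,
    where: where || undefined
  };
}

var options = buildFindOptions({ where: '{"sleep_quality": "good"}', limit: '5' });
console.log(options.where.sleep_quality); // "good"
console.log(options.limit);               // "5"
```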

So we have the find action complete, let’s make sure all of this works. I’ll head back to the terminal and restart the sails server using sails lift and then open a browser with the POSTMAN chrome extension. I’ve added a few more instances of our sleep model. Let’s take a look by sending a get request to the path /sleep. After sending the request the api returned five instances of the model:

[
    {
        "hours_slept": "8",
        "sleep_quality": "good",
        "createdAt": "2014-01-09T23:36:01.552Z",
        "updatedAt": "2014-01-09T23:36:01.552Z",
        "id": 1
    },
    {
        "hours_slept": "12",
        "sleep_quality": "great",
        "createdAt": "2014-01-11T05:08:52.398Z",
        "updatedAt": "2014-01-11T05:08:52.399Z",
        "id": 2
    },
    {
        "hours_slept": "4",
        "sleep_quality": "poor",
        "createdAt": "2014-01-11T05:09:10.319Z",
        "updatedAt": "2014-01-11T05:09:10.319Z",
        "id": 3
    },
    {
        "hours_slept": "6",
        "sleep_quality": "so-so",
        "createdAt": "2014-01-11T05:09:20.456Z",
        "updatedAt": "2014-01-11T05:09:20.456Z",
        "id": 4
    },
    {
        "hours_slept": "10",
        "sleep_quality": "good",
        "createdAt": "2014-01-11T05:09:30.885Z",
        "updatedAt": "2014-01-11T05:09:30.885Z",
        "id": 5
    }
]

Since we didn’t provide an id or any criteria or options, the api used the find model method and returned all instances of the model formatted as json.

Next, let’s make a get request to the path /sleep/2. After pressing send, the api returns a single instance of the model with an id of 2:

{
    "hours_slept": "12",
    "sleep_quality": "great",
    "createdAt": "2014-01-11T05:08:52.398Z",
    "updatedAt": "2014-01-11T05:08:52.399Z",
    "id": 2
}

Now let’s try a request with some criteria. We’ll look for any model instances with an id greater than 1:

localhost:1337/sleep?where={
    "id": {
        ">":  1}
}

After making the request, the api returns four of the five model instances with id’s greater than 1.

Finally, I’m going to combine the criteria with some options. I’m going to make a get request to the path /sleep for model instances with an id not equal to 4, limited to 3 model instances and in descending order.

localhost:1337/sleep?where={
    "id": {
        "!":  4}
}&limit=3&sort=id desc

After making the request, the api returns three instances of the model in descending order.

[
    {
        "hours_slept": "10",
        "sleep_quality": "good",
        "createdAt": "2014-01-11T05:09:30.885Z",
        "updatedAt": "2014-01-11T05:09:30.885Z",
        "id": 5
    },
    {
        "hours_slept": "4",
        "sleep_quality": "poor",
        "createdAt": "2014-01-11T05:09:10.319Z",
        "updatedAt": "2014-01-11T05:09:10.319Z",
        "id": 3
    },
    {
        "hours_slept": "12",
        "sleep_quality": "great",
        "createdAt": "2014-01-11T05:08:52.398Z",
        "updatedAt": "2014-01-11T05:08:52.399Z",
        "id": 2
    }
]

Now that we know that our find action is battle tested, let’s go back to our api roadmap. By building the create action and route and the find action and route, we’re halfway through our api. Next, we’ll build the update action and then we’ll build a route that will bind the Sleep controller and update action to our request. Let’s head back into the SleepController.js file and look at the update action code:

// an UPDATE action
    update: function (req, res, next) {

        var criteria = {};

        criteria = _.merge({}, req.params.all(), req.body);

        var id = req.param('id');

        if (!id) {
            return res.badRequest('No id provided.');
        }

        Sleep.update(id, criteria, function (err, sleep) {

            if (err) return next(err);

            if (sleep.length === 0) return res.notFound();

            res.json(sleep);

        });
    },

The update action consists of finding the id of the model instance to update, coupled with the criteria that will be updated. If there’s no id as a parameter we respond with a 400 status ('No id provided'). Next we attempt to update the model instance using the id and criteria provided. If there’s an error we’ll return it; if not, we respond with the updated model instance formatted as json.

So now that we have the update action complete, we’ll bind that action to the request forming a new update route:

'put /sleep/:id?': 'SleepController.update'

The route points to our update action and uses the same :id? pattern that we used in the find route.

Let’s make sure all of this works. I’ll restart the sails server using sails lift and then open a browser with the POSTMAN chrome extension. I’m going to first make a put request to the path:

http://localhost:1337/sleep/3?added_attrib=12

After making the request, the api returns our instance of the model that has an id of 3 with the added_attrib attribute, formatted as json.

Next, I’ll make a put request to:

http://localhost:1337/sleep/3
{
  "added_3": 42
}

…but instead of using query parameters, I’ll pass the update via the request body. After making the request, the api returns our instance of the model that has an id of 3 with our added_3 attribute formatted as json.

  // Custom Action Route
  'get /sleep/new': 'SleepController.new',

  // Custom CRUD Rest Routes
  'get /sleep/:id?': 'SleepController.find',
  'post /sleep': 'SleepController.create',
  'put /sleep/:id?': 'SleepController.update',
  'delete /sleep/:id?': 'SleepController.destroy',

Now that the update action and route are complete, it’s time to build the last action of our api, the destroy action, and then bind it to our request to form the delete route. Let’s head back into the SleepController.js file and look at the destroy action code:

// a DESTROY action
    destroy: function (req, res, next) {

        var id = req.param('id');

        if (!id) {
            return res.badRequest('No id provided.');
        }

        Sleep.findOne(id).done(function(err, result) {
            if (err) return res.serverError(err);

            if (!result) return res.notFound();

            Sleep.destroy(id, function (err) {

                if (err) return next(err);

                return res.json(result);
            });

        });
    },

So we’ll attempt to assign the id param to a var called id. If it doesn’t exist I’ll return a 400 ('No id provided'). If an id parameter was provided in the request, I’ll attempt to find it in the sleep model. If the model instance doesn’t exist I’ll respond with a 404 Not Found status code. If the model instance does exist, I’ll pass the id to the destroy method of the model, returning either an error, if any, or the deleted model instance formatted as json.

Next I’ll bind the destroy action with the request in its own delete route:

'delete /sleep/:id?': 'SleepController.destroy'

Let’s check it out by restarting the sails server using sails lift. Once again within the POSTMAN chrome extension I’ll make a delete request to the path:

delete http://localhost:1337/sleep/5

After sending the request the api responds with the model instance it just deleted formatted as json.

Congratulations, you’ve built a restful json CRUD api. Any client-side device that supports http requests can now hit our api’s endpoints and request and submit information about our sleep model.

In the next and final episode of this series I’ll show you how Sails’ blueprint actions and routes can be used to create this same restful json CRUD api we just created, automatically for any of your controllers and models.

DOM Access Control Using Cross-Origin Resource Sharing

Introduction

Same-origin policies are a central security concept of modern browsers. In a web context, they prevent a script hosted at one origin — meaning the same protocol, domain name, and port — from reading from or writing to the DOM of another.

This restriction is sensible and useful most of the time. Without a same-origin policy, a script hosted on http://foo.example could hijack cookie data or sensitive document information from http://bar.example and redirect it to http://evilsite.example.

Sometimes, however, a same-origin policy can be burdensome. Making requests across subdomains, for example, is prohibited by a same-origin policy. You also can’t use XMLHttpRequest to pull in JSON data from a third-party API. To make matters worse, workarounds such as JSONP or document.domain can leave us vulnerable to XSS attacks.

What we need, then, is a mechanism for requesting data across origins, but with the ability to deny requests that don’t come from the right source. This is the problem that Cross-Origin Resource Sharing (or CORS) solves.

Cross-Origin Resource Sharing is new in Opera 12. Support is also available in Chrome, Safari, Firefox, and the forthcoming Internet Explorer 10.

What is CORS?

CORS is a system of headers and rules that allow browsers and servers to communicate whether or not a given origin is allowed access to a resource stored on another. Understanding CORS is critical to working with modern web APIs. Cross-domain XMLHttpRequest, and Internet Explorer’s XDomainRequest object, for example, both rely on it.

CORS consists of three request headers and six response headers (see Table 1 below). Browsers automatically set request headers for some cross-origin requests, such as those made using the XMLHttpRequest object.

Table 1: Cross-origin resource sharing headers
Request headers:

Origin: Lets the target host know that the request is coming from an external source, and what that source is.
Access-Control-Request-Method: Included when the HTTP method used is one that may cause a side-effect (such as PUT or DELETE).
Access-Control-Request-Headers: Included when the header is a complex header, such as If-Modified-Since, or a custom header such as Opera Mini’s X-Forwarded-For.

Response headers:

Access-Control-Allow-Origin: Lets the referer know whether it is allowed to use the target resource.
Access-Control-Allow-Methods: Lets the referer know what HTTP methods are allowed, i.e. whether the one(s) specified in Access-Control-Request-Method are okay.
Access-Control-Allow-Headers: Lets the referer know if the headers it sent are okay.
Access-Control-Max-Age: Explicitly informs the referer how many seconds it should store the preflight result. Within this time, it can just send the request, and doesn’t need to bother sending the preflight request again.
Access-Control-Allow-Credentials: Tells the host whether the request can include user credentials.
Access-Control-Expose-Headers: Lets the host know exactly which headers it can expose to the referring application. A header white-list.

Response headers, of course, are returned by the URI in question. You can set them in your server configuration file or per URI using a server-side language. Which approach you choose will depend on the kind of application you’re building. We’ll cover each response header in the Sending CORS Response Headers section.

Though cross-origin resource sharing is a permissions system of sorts, understand that it is not a form of content protection: it is a form of cross-site scripting protection. Browsers will still complete the HTTP request, but will expose the resulting response body only if the response includes the appropriate headers. You will experience this if you run the CORS demos.

Speaking of running demos, I recommend using an HTTP monitor to observe headers, as built-in developer tools can sometimes mask what’s happening under the hood. A good open source choice is Wireshark, and pay-for alternatives include Charles (Mac/Win/Linux; US$50 / ~€38) and HTTPScoop (Mac; €12 / ~US$15).

How browsers make simple cross-origin requests

When a script attempts a cross-origin request, the user agent will automatically include one or more request headers, depending on how the request is formed. If the server or application sends the appropriate response headers, subsequent attempted changes to the DOM will succeed.

Here’s an example. The code below uses XMLHttpRequest to retrieve a JSON-formatted file from http://foo.example. We’ll assume that this script is hosted on http://bar.example.

var xhr = new XMLHttpRequest();
xhr.onload = function(e){
  // Build a list and append it to the document's body.
}
xhr.open('GET', 'http://foo.example/data.json');
xhr.send( null );

Now let’s look at how Opera and other browsers handle this cross-origin request. What follows is an example of Opera’s request headers.

GET /data.json HTTP/1.1
User-Agent: Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.10.238 Version/12.00
Host: foo.example
Accept: text/html, application/xml;q=0.9, application/xhtml+xml, image/png, image/webp, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1
Accept-Language: en, en-US
Accept-Encoding: gzip, deflate
Referer: http://bar.example/document_making_the_request.html
Connection: Keep-Alive
Origin: http://bar.example

See that Origin header? It lets http://foo.example/data.json know that this request is coming from an external source. Notice too that the Referer and Origin headers have different values, and that the value of Origin does not include a trailing slash.

Now let’s look at the URI’s response headers.

HTTP/1.1 200 OK
Date: Tue, 04 Oct 2011 00:18:35 GMT
Server: Apache/2.2.20
Cache-Control: max-age=0
Expires: Tue, 04 Oct 2011 00:18:35 GMT
Vary: Accept-Encoding
Content-Type: application/json
Access-Control-Allow-Origin: http://bar.example

Here we have an Access-Control-Allow-Origin response header. That header indicates whether or not http://bar.example is allowed to use this resource. Because the value of Access-Control-Allow-Origin matches http://bar.example, subsequent DOM operations requiring data.json will succeed (as you can see in my CORS example). If the Access-Control-Allow-Origin value did not match, or the header was missing, then the contents of data.json would not be made available to the DOM. We’ll discuss the Access-Control-Allow-Origin header in greater detail below.

How browsers make complex cross-origin requests

For simple request methods (GET, HEAD and POST), and simple request headers (Accept, Accept-Language, Content-Language, Last-Event-ID, or Content-Type) the exchange between the Origin header and the Access-Control-Allow-Origin header is enough.

Complex request methods and request headers (including custom headers) work a bit differently. They require that the cross-origin request be pre-approved using a preflight request.
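As a rule of thumb, the simple/complex distinction can be captured in a small predicate. This is a simplified sketch of the rules stated above, not the browser’s full algorithm (real browsers also inspect the Content-Type value, for example):

```javascript
// Simplified check: will this request avoid a preflight?
var SIMPLE_METHODS = ['GET', 'HEAD', 'POST'];
var SIMPLE_HEADERS = ['accept', 'accept-language', 'content-language',
                      'last-event-id', 'content-type'];

function isSimpleRequest(method, headerNames) {
  if (SIMPLE_METHODS.indexOf(method.toUpperCase()) === -1) {
    return false; // e.g. PUT or DELETE triggers a preflight
  }
  // Every header set by the script must also be a simple header.
  return headerNames.every(function (name) {
    return SIMPLE_HEADERS.indexOf(name.toLowerCase()) !== -1;
  });
}
```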

A preflight request asks the target server whether it is okay to make a full request using a particular method or header. In a typical cross-origin request, the user agent says to the server, Hi there! It’s http://foo.example. Please send me this resource. In a preflight request, the user agent will start off by saying, Hey, hey! It’s http://foo.example. I am going to ask for this resource using the PUT method. I also plan to include an If-Modified-Since header. Will you tell me whether you can handle this method and header before I send the actual request?

During a preflight operation, the user agent first sends a request using the OPTIONS method. In addition to the Origin header, the preflight request will include the Access-Control-Request-Method and/or an Access-Control-Request-Headers header.

Access-Control-Request-Method is included when the HTTP method used is one that may have a side effect — using PUT or DELETE, for example. Browsers also send Access-Control-Request-Headers when the header is a complex header, such as If-Modified-Since, or a custom header such as Opera Mini’s X-Forwarded-For.

Let’s look at an example using the PUT method. This request will be made from servera.example to serverb.example using XMLHttpRequest.

var xhr = new XMLHttpRequest() ;
xhr.open('PUT', 'http://serverb.example/formhandler/');
xhr.send('data=some+data');

Now let’s look at the preflight request headers.

OPTIONS /formhandler/ HTTP/1.1
User-Agent: Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.10.238 Version/12.00
Host: serverb.example
Accept: text/html, application/xml;q=0.9, application/xhtml+xml, image/png, image/webp, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1
Accept-Language: en, en-US
Accept-Encoding: gzip, deflate
Referer: http://servera.example/make_cross_origin_request
Connection: Keep-Alive
Content-Length: 0
Origin: http://servera.example
Access-Control-Request-Method: PUT

The URI returns a standard set of response headers. But it also includes the Access-Control-Allow-Origin and Access-Control-Allow-Methods headers.

HTTP/1.1 200 OK
Date: Tue, 06 Dec 2011 23:28:16 GMT
Server: Apache/2.2.21
Access-Control-Allow-Origin: http://servera.example
Access-Control-Allow-Methods: PUT
Cache-Control: max-age=0
Expires: Tue, 06 Dec 2011 23:28:16 GMT
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 134
Content-Type: text/html; charset=UTF-8

Here the values of Access-Control-Allow-Origin and Access-Control-Allow-Methods match the values of Origin and Access-Control-Request-Method, respectively. As a result, this preflight request will be followed by an actual request that includes the request body (in this case, data=some+data).

We will cover Access-Control-Allow-Methods and a similar header, Access-Control-Allow-Headers, in the Sending CORS response headers section. For now, it’s enough to understand that if either of these headers were missing or contained values that did not match, the browser would cancel the actual request.

Sending CORS response headers

Scripts can initiate cross-origin requests, but the target URI must permit fetching by sending the appropriate response headers. Let’s look at each possible response header.

Access-Control-Allow-Origin

As its name suggests, the Access-Control-Allow-Origin header is a response to the Origin request header. It tells the user agent whether the requesting origin has permission to fetch the resource.

Access-Control-Allow-Origin can be set to one of three values:

  • null, which denies all origins;
  • *, the wildcard operator, which allows all origins; or
  • An origin list of one or more space-separated origins.

The following examples are all valid headers:

Access-Control-Allow-Origin: null
Access-Control-Allow-Origin: *
Access-Control-Allow-Origin: http://foo.example http://bar.example

In practice, however, origin lists (Access-Control-Allow-Origin: http://foo.example http://bar.example) do not yet work in any browser. Instead, servers and applications must return an Access-Control-Allow-Origin header conditionally, based on the value of the Origin request header. An example of how to do this follows, in the Conditional CORS section.

Also keep in mind that, though it is possible to use a wildcard value, it isn’t necessarily a good idea. Doing so will allow scripts from any origin access to your document tree. It is safest to limit access to origins you know, and authenticate requests for sensitive data.

Access-Control-Allow-Methods

If a preflight request contains an Access-Control-Request-Method header, the target URI must return an Access-Control-Allow-Methods header for the request to be completed successfully. The header’s value must be one or more HTTP methods such as PUT, DELETE, TRACE or CONNECT (again, GET, POST, and HEAD are considered simple methods, and will not cause this header to be included).

It’s perfectly valid to allow multiple methods. However, you must separate them with a comma: Access-Control-Allow-Methods: PUT, DELETE.

Access-Control-Allow-Headers

Access-Control-Allow-Headers has a similar function to Access-Control-Allow-Methods, but instead tells the browser whether a particular header is allowed.

Standard or custom headers are appropriate values for Access-Control-Allow-Headers. For the cross-origin request to succeed, its value must match (or include) the value of the Access-Control-Request-Headers header. Let’s look at an example.

OPTIONS /data.json HTTP/1.1
User-Agent: Opera/9.80 Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.10.238 Version/12.00
Host: domain.example
Accept: text/html, application/xml;q=0.9, application/xhtml+xml, image/png, image/webp, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1
Accept-Language: en, en-US
Accept-Encoding: gzip, deflate
Referer: http://requestingserver.example/path/to/document_making_the_request/
Connection: Keep-Alive
Origin: http://requestingserver.example
Access-Control-Request-Headers: X-Secret-Request-Header

The response headers might look like this:

HTTP/1.1 200 OK
Date: Tue, 04 Oct 2011 00:18:35 GMT
Server: Apache/2.2.20
Cache-Control: max-age=0
Expires: Tue, 04 Oct 2011 00:18:35 GMT
Vary: Accept-Encoding
Content-Type: text/html; charset=UTF-8
Access-Control-Allow-Origin: http://requestingserver.example
Access-Control-Allow-Headers: X-Secret-Request-Header, X-Forwarded-For

In this case, the request succeeds. If the value of Access-Control-Allow-Headers had been X-Not-A-Secret instead, or missing entirely, the request would have failed. As with Access-Control-Allow-Methods, multiple header values must be separated by a comma.

Access-Control-Allow-Credentials

Cross-origin requests do not include cookies or HTTP authentication information by default; they can, however, if the credentials flag is set to true. In the case of XMLHttpRequest, the credentials flag can be set using the withCredentials property. Below is an example of such a request. If a user cookie is available, it will be sent to the server.

xhr = new XMLHttpRequest();
xhr.open('GET','/page_requiring_authentication/');
xhr.withCredentials = true;
xhr.send( null );

Setting Access-Control-Allow-Credentials tells the user agent whether the response should be exposed when the credentials flag is true. If sent in response to a preflight request, it indicates that the actual request can include user credentials. In these cases, the Access-Control-Allow-Origin header must match the origin in order for the request to succeed; a wild card value will not work. Again, if the header is missing entirely, the request will fail (view my Access-Control-Allow-Credentials demo).

Access-Control-Expose-Headers

Browsers, by default, limit which cross-origin response headers are available to the DOM. Using XMLHttpRequest’s getResponseHeader() method to read the Content-Length header, for example, will result in a null value. You may, however, want your application to know how many bytes of content to expect. Access-Control-Expose-Headers is designed to let developers white-list headers that can safely be exposed to the requesting origin.

Unfortunately, Access-Control-Expose-Headers does not yet work as you might expect in some browsers. To date:

  • Opera and Firefox will permit both standard HTTP headers and custom headers to be exposed.
  • Chrome and Safari will not expose headers they deem unsafe, including custom headers.
  • Internet Explorer will expose custom headers, but not standard ones that it deems unsafe.

Content-Length, for example, can be exposed in Firefox and Opera, but not Internet Explorer, Chrome, or Safari. A custom header such as X-Secret-Request-Header can be exposed in Opera, Internet Explorer, and Firefox, but not Chrome or Safari. To see this for yourself, compare how my Access-Control-Expose-Headers demo works in different browsers.

Access-Control-Max-Age

When a user agent makes a preflight request, the result is stored in the preflight result cache. The default expiration varies from browser to browser, but cross-origin requests made after the result cache expires will be preceded by another preflight request.

Access-Control-Max-Age explicitly informs the user agent how many seconds it should store the preflight result (try viewing my Access-Control-Max-Age demo). Access-Control-Max-Age: 15, for instance, tells the browser If you make another request in the next fifteen seconds, you can skip the preflight process. Just send the request. Setting Access-Control-Max-Age to zero (Access-Control-Max-Age: 0) disables the preflight result cache.
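The caching behaviour described above can be modelled roughly as follows. This is a simplification for illustration only (timestamps in seconds), not how browsers actually implement the preflight result cache:

```javascript
// Rough model of the preflight result cache. store() records when a
// preflight result expires; canSkipPreflight() says whether a new
// request to that URL may skip the preflight step.
function PreflightCache() {
  this.expiry = {};
}
PreflightCache.prototype.store = function (url, maxAgeSeconds, nowSeconds) {
  if (maxAgeSeconds > 0) {
    this.expiry[url] = nowSeconds + maxAgeSeconds;
  } else {
    // Access-Control-Max-Age: 0 disables the cache for this URL.
    delete this.expiry[url];
  }
};
PreflightCache.prototype.canSkipPreflight = function (url, nowSeconds) {
  return this.expiry[url] !== undefined && nowSeconds < this.expiry[url];
};
```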

How to Set Response Headers

The easiest way to enable cross-origin resource sharing is to set response headers per file type or directory using a server configuration file. The example that follows is specific to Apache, and requires mod_headers. To permit requests for all JSON files from http://foo.example, your .htaccess file should contain the following.

<IfModule mod_headers.c>
  <FilesMatch "\.json$">
      Header set Access-Control-Allow-Origin "http://foo.example"
  </FilesMatch>
</IfModule>

If you use another web server, consult its documentation for instructions.

Setting CORS headers in the server configuration is adequate in some situations, although in most cases you’ll want to set access control response headers per URI. This should be done at the application level using a server-side language of your choice.

Conditional CORS

As discussed above, no major browser yet supports multiple origins as a value for the Access-Control-Allow-Origin header. So what do you do if you want to share data across several origins? The solution is to set the value conditionally.

The simple example that follows uses PHP to send an Access-Control-Allow-Origin response header only if the supplied origin is in our white list ($allowed):

<?php
# First check whether the Origin header exists.
# (Note: in_array() checks values, not keys, so we use array_key_exists.)
if( array_key_exists('HTTP_ORIGIN', $_SERVER) ) {
  # Define a list of permitted origins
  $allowed = array('http://foo.example','http://bar.example','http://dom.example');

  # Check whether our origin is permitted.
  if( in_array($_SERVER['HTTP_ORIGIN'], $allowed) ){
    $filtered_url = filter_input(INPUT_SERVER, 'HTTP_ORIGIN', FILTER_SANITIZE_URL);
    $send_header  = 'Access-Control-Allow-Origin: '.$filtered_url;
    header($send_header);
    // Send your content here.
  }
} else {
  exit;
}
?>

A more robust version of the above example might keep a list of allowed origins for each URI in a datastore. Again, this is not an effective way to protect sensitive data, but it is a bulwark against XSS attacks.
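The same white-list check can be sketched in Node.js, the runtime covered later in this document. The origins are illustrative; in a real server you would read req.headers.origin and pass the result to res.setHeader():

```javascript
// Return the value to send in Access-Control-Allow-Origin for a given
// request origin, or null if the origin is not on the white-list.
var allowedOrigins = [
  'http://foo.example',
  'http://bar.example',
  'http://dom.example'
];

function corsAllowOrigin(origin) {
  return allowedOrigins.indexOf(origin) !== -1 ? origin : null;
}
```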

Learn More

For a greater understanding of cross-domain scripting and cross-origin resource sharing, consult the W3C Cross-Origin Resource Sharing specification.

How to install node.js?

In this post we detail how to install node on Mac, Ubuntu, and Windows.
Mac

If you’re using the excellent homebrew package manager, you can install node with one command: brew install node.

Otherwise, follow the below steps:

Install Xcode.
Install git.
Run the following commands:

darwin_setup.sh

git clone git://github.com/ry/node.git
cd node
./configure
make
sudo make install

You can check it worked with a simple Hello, World! example.
Ubuntu

Install the dependencies:
sudo apt-get install g++ curl libssl-dev apache2-utils
sudo apt-get install git-core

Run the following commands:

ubuntu_setup.sh

git clone git://github.com/ry/node.git
cd node
./configure
make
sudo make install

You can check it worked with a simple Hello, World! example.

Thanks to code-diesel for the Ubuntu dependencies.
Windows

Currently, you must use cygwin to install node. To do so, follow these steps:

Install cygwin.

Use setup.exe in the cygwin folder to install the following packages:
devel → openssl
devel → g++-gcc
devel → make
python → python
devel → git

Open the cygwin command line with Start > Cygwin > Cygwin Bash Shell.
Run the below commands to download and build node.

cygwin_setup.sh

git clone git://github.com/ry/node.git
cd node
./configure
make
sudo make install

For more details, including information on troubleshooting, please see the GitHub wiki page.
Hello Node.js!

Here’s a quick program to make sure everything is up and running correctly:
hello_node.js

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello Node.js\n');
}).listen(8124, "127.0.0.1");
console.log('Server running at http://127.0.0.1:8124/');

Run the code with the node command line utility:

> node hello_node.js
Server running at http://127.0.0.1:8124/

Now, if you navigate to http://127.0.0.1:8124/ in your browser, you should see a nice message.
Congrats!

You’ve installed node.js.

AngularJS Tutorial: Learn to Build Modern Web Apps

Introduction

This tutorial will guide you through the process of creating a full-stack application. It features step-by-step instructions on how to build a fantasy football application, code snippets of the full application, and explanations on design decisions.

We have written a refresh of this tutorial to be a frontend-only implementation using Firebase. The new version of the tutorial can be found here.

Our intention is to provide the AngularJS community with instructions on how to use AngularJS correctly and effectively, but also in its most modern form. The application you are building will go beyond basic use of AngularJS, and we will attempt to explore as much of the framework as possible. We also feel strongly about maintaining modernity in a tutorial, so we will keep it congruent with AngularJS as the framework and community matures. This tutorial is built on top of AngularJS v1.2.0rc1.

The tutorial is a living thing, a work in progress. We are constantly extending the tutorial and making changes and corrections. If you find errata, think something should be changed, or would like to suggest an improvement or new section, we would love to hear from you.
Source code and PDF eBook

This tutorial is provided to you free of charge on the site. We built this in the interest of advancing the AngularJS framework and community.

We encourage you to purchase the source code and PDF of this tutorial to help fund our continued efforts in building more material and features for Thinkster. Your money literally goes towards paying our rent and food for the next few months while we make the tutorial even more awesome!

Download the source code and PDF eBook here, securely through Gumroad.

All users who purchase the tutorial will have free access to new versions of the up-to-date source code and PDF as they are released.

You have 100% ownership over the PDF and source code and can use them however you like.
Prerequisites

This tutorial assumes you already have a working knowledge of AngularJS. Throughout, there will be references to parts of the “A Better Way to Learn AngularJS” curriculum if you need to clarify or refresh on a certain subject. We recommend going through the entire curriculum before beginning this tutorial.

A knowledge of MongoDB, NodeJS, and ExpressJS will be of great assistance, but is not required.
The Stack

This application will be built on top of the MEAN stack – MongoDB, ExpressJS, AngularJS, and NodeJS. The wonderful people at http://www.mean.io/ have written a boilerplate application stack. We took the stack and stripped it down to a more basic form, which is the point from which you will start.

AngularJS can just as easily be used with a Ruby on Rails, Django, CakePHP, or any other server-side framework. Similarly, you could substitute MongoDB for any other database to use with AngularJS. We chose the MEAN stack for the tutorial because it offers an extremely clean implementation of the application, but by no means is it the only implementation.

We included a backend component to the tutorial because it is impossible to create a truly awesome application with only AngularJS and other JS libraries – you need a server and database component. Thus, the tutorial will go through the construction of an entire application, both frontend and backend.

Part of this tutorial will be spent demystifying why the MEAN stack works the way it does. This is essential to a complete understanding of how to build an application with AngularJS.
Why fantasy football for a tutorial?

Besides the fact that fantasy football is totally awesome and a ton of fun?

We are bored to tears by building Twitter clones over and over again in tutorials, and wanted to mix it up a bit. Building a fantasy football application is challenging, and can be broken down into pieces, so it fits into the tutorial paradigm well.
What is fantasy football?

Fantasy football is centered around creating a “fantasy” team out of real life NFL players, and pitting your fantasy team against other teams. Groups of users, usually 8-12, will create their own teams as part of a “league”, and will compete against other individuals within that league.

Individuals in a fantasy football league will choose real players in the American National Football League (NFL) for their teams, and these teams will face each other weekly in a one-on-one matchup during actual NFL games. Players in the real games perform actions that score points in fantasy football, and whichever fantasy team scores more points in that matchup wins. Teams with the best win/loss records enter fantasy playoffs, and an eventual champion is selected.
Makeup of a fantasy team

There are 32 NFL teams, and your fantasy team will consist of players from some of these teams.

There are lots of different positions on an NFL team, but for fantasy football purposes, we simplify the different positions greatly: all you have to worry about is Quarterback (QB), Runningback (RB), Wide Receiver (WR), Tight End (TE), Kicker (K), and Defense/Special Teams (D/ST). Defense/Special Teams is a special position on your roster: it represents the entire Defense and Special Teams units, which are made up of many players, for one of the 32 teams. All other players you select will be individual players.

How players score points isn’t important right now; you can worry about that later. For those of you familiar with fantasy football, this application will operate under standard scoring rules.

Fantasy teams will have 16 members, but only 9 of them will actually count towards scoring in the weekly matchup against another fantasy team. The other 7 will remain on the team’s “bench”, and any points they score will not count towards your team’s total that week. Your team, and every team in the league, will select players in a “fantasy draft” – more about that later.

Your fantasy team’s 9-player active roster will have 1 Quarterback (QB), 2 Runningbacks (RB), 2 Wide Receivers (WR), 1 Tight End (TE), 1 Flex (which can either be a RB, WR, or TE), 1 Kicker (K), and 1 Defense/Special Teams (D/ST).

An example 16-man roster might look like the following:

Active Roster

QB: Aaron Rodgers (GB)

RB: Adrian Peterson (MIN)

RB: Arian Foster (HOU)

WR: Calvin Johnson (DET)

WR: A.J. Green (CIN)

TE: Jimmy Graham (NO)

FLEX: Ray Rice (BAL)

K: Stephen Gostkowski (NE)

D/ST: Seattle Seahawks

Bench

TE: Rob Gronkowski (NE)

RB: Marshawn Lynch (SEA)

WR: Brandon Marshall (CHI)

QB: Matt Ryan (ATL)

RB: C.J. Spiller (BUF)

K: Blair Walsh (MIN)

D/ST: Chicago Bears (CHI)
Recap

Hopefully some of that stuck, but if a lot of it went over your head, don’t worry. As you build the application, you will begin to understand much more clearly how fantasy football works. Let’s get started!
Getting Familiar With the MEAN Stack

We’ve provided the starting point for the application on github: https://github.com/msfrisbie/mean-stripdown.

Clone the application with git clone https://github.com/msfrisbie/mean-stripdown.git

Install Node.js: http://howtonode.org/how-to-install-nodejs

The application uses Node.js and MongoDB, so make sure you have those installed.

Install MongoDB: http://docs.mongodb.org/manual/installation/

Install the app dependencies with npm install

With all this set up, you should be able to run the application! From the mean-stripdown directory, running node server should start an ExpressJS node server on port 3000. Navigate to localhost:3000, and you should see the skeleton app working!
What Am I Actually Dealing With Here?

Before you actually get your hands dirty, familiarize yourself with what is provided in the skeleton application.

App Directory

This contains all the files involved in server-side program flow. Your directory structure will look like this:

app
├── controllers
│   ├── index.js
│   └── users.js
├── models
│   └── user.js
└── views
    ├── 404.jade
    ├── 500.jade
    ├── includes
    │   ├── foot.jade
    │   └── head.jade
    ├── index.jade
    ├── layouts
    │   └── default.jade
    └── users
        ├── auth.jade
        ├── signin.jade
        └── signup.jade

The server is using the Jade templating engine to render views. You won’t need to worry about this too much right now, as none of your AngularJS templates will be done in Jade. You see two controllers, one user model, and a bunch of views provided for you.

The default.jade, foot.jade, and head.jade views are the ‘wrapper’ templates for the application, which surround the AngularJS templates. Looking through these should be pretty self-explanatory.
Authentication and the Execution Environment

You might be asking yourself, “Matt, isn’t this an AngularJS tutorial? Why is the server handling all these views?”

The answer lies in application security. AngularJS, by itself, cannot be used to securely authenticate a user. AngularJS exists entirely in the browser’s JavaScript execution, and therefore it must be assumed that the user has complete control over the execution environment. The user is able to modify any part of the code you provide to them, so authentication cannot be handled solely by the browser; there must be a remote server aspect to it.

The stack provided sets this up nicely. The server uses PassportJS, cookies, and a User model to authenticate users in a standard fashion. The auth.jade, signin.jade, and signup.jade views, along with the users.js controller, are all part of this. The server provides the browser with a cookie to identify the user session, and every transaction with the server after that will use that cookie to identify the user, not by anything Angular will provide.

Now you might be asking, “OK Matt, that’s all well and good that the server’s authentication is squared away, but now how does AngularJS know the user is authenticated?”

Good question! You’ll start by examining index.jade:
app/views/index.jade

extends layouts/default

block content
  section(ng-view)
  script(type="text/javascript").
    window.user = !{user};

When rendering this template, if the user has authenticated, the user object can be interpolated into the view. When passed to the client, the user object is attached to the window object, and is now available to your JavaScript as window.user. When the user has not authenticated, the window.user object will be null, and everything still works.
Getting Into AngularJS

You won’t stop with the window.user object, though. Even though this object is available globally, using this throughout the application to handle authentication introduces a bit of code smell. Since you won’t need to use the user object everywhere, but you’d like to use it in a *lot* of places, this seems like the perfect opportunity to write your first service.

Public Directory

This contains CSS, images, libraries, and all your AngularJS files and views. Your directory structure will look like this:

public
├── css
│ ├── …
├── img
│ ├── …
├── js
│ ├── app.js
│ ├── config.js
│ ├── controllers
│ │ ├── header.js
│ │ └── index.js
│ ├── directives.js
│ ├── filters.js
│ ├── init.js
│ └── services
│ └── global.js
├── lib
│ ├── angular
│ │ ├── …
│ ├── angular-bootstrap
│ │ ├── …
│ ├── angular-cookies
│ │ ├──…
│ ├── angular-mocks
│ │ ├── …
│ ├── angular-resource
│ │ ├── …
│ ├── angular-route
│ │ ├── …
│ ├── angular-scenario
│ │ ├── …
│ ├── bootstrap
│ │ ├── …
│ ├── jquery
│ │ ├── …
│ ├── json3
│ │ ├── …
├── robots.txt
└── views
    ├── header.html
    └── index.html

The lib directory contains angular.js proper, and also modules that you will list as dependencies for your application.

All the AngularJS files you will modify live in the js/ directory. Views obviously live in the views directory.

app.js attaches an angular instance to the window as a window.app object, and defines module dependencies.

config.js sets up routing and other configuration options.

The controllers/ directory contains all your application controllers, separated into their own files. You will continue this separate file convention as the application grows.

The services/ directory contains all your application services.

The directives.js and filters.js files will contain your directives and filters, respectively. These will eventually be broken out into multiple files.

init.js provides some setup configuration. In it, you will notice that the application is manually bootstrapped, as opposed to being declared in a view with ng-app.
Writing The Authentication Service

Before proceeding, make sure you are familiar with how services work in AngularJS. If you need a refresher on Angular services, read through Part 11: Under the Hood.

Additionally, read through Part 13: $http and Server Interaction.
Global Service

Recall that you are trying to convey authentication information to AngularJS cleanly. You will start with the global.js file, which is basically empty:
public/js/services/global.js

window.angular.module('ngff.services.global', [])
  .factory('Global', function () {

  });

This Global service will return an object you can use to identify the current user and to check whether anyone is logged in. Since logging in and out is a synchronous server action (the page fully reloads), the service is re-created each time the authentication state changes. You can therefore read the window.user object directly and use it to convey the authentication state to the AngularJS application.

Change global.js to match the following:
public/js/services/global.js

window.angular.module('ngff.services.global', [])
  .factory('Global', function () {
    var current_user = window.user;
    return current_user;
  });

Great! The service returns the current_user object, so injecting the Global service anywhere in your application gives you access to that user object. This would work fine, but you'd like to encapsulate current_user so the application doesn't access it directly.

Let’s instead refactor it to return an object with methods that indirectly interact with the current_user object:
public/js/services/global.js

window.angular.module('ngff.services.global', [])
  .factory('Global', function () {
    var current_user = window.user;
    return {
      currentUser: function () {
        return current_user;
      },
      isSignedIn: function () {
        return !!current_user;
      }
    };
  });
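Two details are worth noting: the factory closes over current_user rather than exposing it, and isSignedIn uses double negation (!!) to coerce the user object (or null) into a true boolean. The pattern is easy to try outside Angular; in this sketch, windowUser and globalFactory are hypothetical stand-ins invented for illustration:

```javascript
// Stand-in for the window.user object set by the server (hypothetical).
var windowUser = { username: 'alice' };

// The same factory pattern, outside Angular: the returned object closes
// over current_user, so callers can read it only through these methods.
function globalFactory(user) {
  var current_user = user;
  return {
    currentUser: function () { return current_user; },
    isSignedIn: function () { return !!current_user; } // !!null === false
  };
}

var Global = globalFactory(windowUser);
console.log(Global.isSignedIn());              // true: !!{...} is true
console.log(globalFactory(null).isSignedIn()); // false: signed out
```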

Now that you have created a service, you need to add it as a dependency to the application.
Setting Up Application Dependencies

You will see the following in app.js:
public/js/app.js

window.app = angular.module('ngFantasyFootball', [
  'ngCookies',
  'ngResource',
  'ui.bootstrap',
  'ngRoute',
  'ngff.controllers',
  'ngff.directives',
  'ngff.services'
]);

// bundling dependencies
window.angular.module('ngff.controllers', ['ngff.controllers.header', 'ngff.controllers.index']);
window.angular.module('ngff.services', []);

Add the new module dependency 'ngff.services.global' to the 'ngff.services' module:
public/js/app.js

window.angular.module('ngff.services', ['ngff.services.global']);

At this point, you are able to create a user in the application and sign in. Once you're signed in, use your browser's console to check the value of window.user; you should see the user object. Navigating to /signout (not #!/signout, an important difference: the former is a full request to the server, while the latter is a client-side route) and then checking window.user should show that it is null.

Terrific! The first part of authentication is taken care of.
Dependency Injection in a Controller

You should be comfortable with at least basic concepts in AngularJS controllers and dependency injection. For a refresher, read through Part 2: Taking It for a Spin.

Also, the tutorial will dive right into Angular scope. For a review, read through Part 5: Scope.

Now that you have this global service working, let's make it actually do something. header.js contains the controller for the header bar; it, too, starts out empty:
public/js/controllers/header.js

window.angular.module('ngff.controllers.header', [])
  .controller('HeaderController', [
    function () {

    }]);

You can't use the Global service in the view without attaching it to the scope, so the controller needs both $scope and the Global service.

Inject both of these into the controller:
public/js/controllers/header.js

window.angular.module('ngff.controllers.header', [])
  .controller('HeaderController', ['$scope', 'Global',
    function ($scope, Global) {

    }]);

Recall that the order in which these dependencies are listed is up to you: Angular's injector resolves each one by the name in the annotation array, so ['Global', '$scope', function (Global, $scope) {...}] would work just as well. All that matters is that the strings line up, one for one, with the function's parameters.
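The idea is easy to see with a toy resolver (an illustrative sketch only, not Angular's actual injector): each string in the annotation array is looked up by name, so reordering the strings together with the parameters changes nothing.

```javascript
// Toy by-name dependency resolver (sketch, not Angular's real injector).
// The last array element is the function; the strings before it name
// its dependencies, looked up in a registry.
var registry = {
  $scope: {},
  Global: { isSignedIn: function () { return true; } }
};

function invoke(annotated) {
  var fn = annotated[annotated.length - 1];
  var deps = annotated.slice(0, -1).map(function (name) {
    return registry[name];
  });
  return fn.apply(null, deps);
}

// Either ordering injects the same objects:
var a = invoke(['$scope', 'Global', function ($scope, Global) {
  return Global.isSignedIn();
}]);
var b = invoke(['Global', '$scope', function (Global, $scope) {
  return Global.isSignedIn();
}]);
console.log(a, b); // true true
```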

Use the injected service object, and attach it to the scope:
public/js/controllers/header.js

window.angular.module('ngff.controllers.header', [])
  .controller('HeaderController', ['$scope', 'Global',
    function ($scope, Global) {
      $scope.global = Global;
    }]);

Using The Controller in the View

You'll notice that, even when you're signed in, you can still see the Sign In and Sign Up buttons in the header bar. That doesn't quite make sense; instead, you'd like to show the user's name and a way to sign out. Take a look at public/views/header.html:
public/views/header.html
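The relevant part of the header looks roughly like this (a hypothetical sketch; the real file's classes and structure may differ):

```html
<!-- Sketch of public/views/header.html (hypothetical markup). -->
<!-- Shown while signed out; the conditional is still empty. -->
<ul class="nav navbar-nav navbar-right" ng-hide="">
  <li><a href="#!/signin">Sign In</a></li>
  <li><a href="#!/signup">Sign Up</a></li>
</ul>
<!-- Shown while signed in; also an empty conditional, plus a
     hard-coded 'User' label. -->
<ul class="nav navbar-nav navbar-right" ng-show="">
  <li class="dropdown">
    <a class="dropdown-toggle" data-toggle="dropdown">User <b class="caret"></b></a>
    <ul class="dropdown-menu">
      <li><a href="/signout" target="_self">Sign Out</a></li>
    </ul>
  </li>
</ul>
```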

You can see that the two visibility directives, ng-show and ng-hide, are set to empty conditionals, and that the user dropdown shows a hard-coded 'User' label.

Make use of the service you just built, and fill in the directives:
public/views/header.html