Welcome! Thanks for trying out hapi pal. Today, we're going to show you how to start a project with pal, explaining the component parts and a bit of the theory behind them along the way.
We assume intermediate JavaScript experience, with at least a basic grasp of working with nodejs, and familiarity with server-side web development (e.g. routing, models, database management, etc.). We won't cover these concepts here, just how to implement them.
With that assumed, this guide is likely, though not exclusively, most useful for:
- Someone familiar with hapi, looking to improve their architecture or sharpen their tools
- Someone with zero experience with hapi who'd like a gentle introduction to working with the framework, for example to vet it as their next project's web server framework
If you're here and you've read this far, you're probably at least a little bit interested in some of this, so we encourage you to read on—you might still find the below interesting, if not useful.
A friendly, proven starting place for your next hapi plugin or deployment
Pal isn't a framework, but a pre-designed architecture for a hapi project and suite of tools that help keep you focused on getting real work done with hapi, instead of worrying about bigger, hairier existential questions like project structure, scalability across teams and servers, and portability, to name a few. In short, our purpose is to ease you into working with hapi, showing how to become productive with the framework quickly while ensuring that your project rests on a solid, scalable foundation.
There are, of course, a million ways to skin this particular cat. Pal is opinionated. You might have differing opinions on how to set up your projects. That's all good! Our tools are designed to fit into existing projects or be adopted progressively over time. If you're at all interested in learning or further mastering hapi, we encourage you to keep reading. While this tutorial will explain hapi pal's philosophy and organization, it will also give you hands-on knowledge of the hapi framework by easing you into its basic features. You can then transfer this knowledge and theory to any hapi project, regardless of whether you keep using pal.
If nothing else, we hope to show you how to be productive with hapi. If you like using pal, well, of course we'd be pumped about that too!
We'll work through an example application, available in the hapipal/examples repository, here.
In our example we'll follow our dear Paldo, a playful heart who likes to share riddles with friends.
We'll help Paldo build out and grow a project—conveniently for us, a web server—to scratch this itch. Paldo will begin by serving random, hard-coded riddles to their friends; perform some refactoring; move on to allow friends to look up answers to riddles; then incorporate a SQL database to back all this juicy riddle data.
First things first, we need to set up a base pal project. Run the following:
npm init @hapipal paldo-riddles
cd paldo-riddles
npm install
# make your first commit to init project history
git add --all
git commit -m "Initial commit"
On running npm init @hapipal paldo-riddles, you'll be prompted with the npm init dialog, where you can enter details about your project that will go into its package.json file. Feel free to take the time to fill out the details, or just "enter" all the way through—either is fine for the purposes of this tutorial.
You now have a base pal project directory ready to go!
npm init @hapipal paldo-riddles calls hapi pal's command line utility hpal to bootstrap a new project in a directory titled paldo-riddles in our current working directory (under the hood, the argument to hpal's new command is a path). We'll cover more on hpal in just a bit.
You should now be sitting in a directory that looks like this:
paldo-riddles/
├── lib/
│ ├── routes/
│ ├── .hc.js
│ └── index.js
├── node_modules/
├── server/
│ ├── .env-keep
│ ├── index.js
│ └── manifest.js
├── test/
│ └── index.js
├── .eslintrc.json
├── .gitignore
├── .npmignore
├── package-lock.json
├── package.json
└── README.md
Don't worry about understanding the anatomy of all this just yet—we'll talk about the directory structure as we go!
Now, give the following a spin:
npm start
You should see:
> paldo-riddles@1.0.0 start /your/local/path/paldo-riddles
> node server
Debug: start
Server started at http://localhost:3000
If you then visit that address in your browser or cURL it (curl http://localhost:3000), you should receive the following:
{
"statusCode": 404,
"error": "Not Found",
"message": "Not Found"
}
And that's exactly what we want for now! Everything's working and setup, congrats! Now, for something more interesting.
Behind this simple call to npm start, some important steps are taken to configure and start your server. The most important things to know for now are that hapi is deeply configuration-oriented, and that the server/ directory is where configuration related to your deployment lives. We distinguish your deployment from the guts of your application, which live in lib/, for all sorts of useful reasons that we lay out in a separate article, "The joys of server / plugin separation".
We won't do anything too complex with configuring our server here, but pal comes with a couple of tools for configuring our server and its attendant plugins:
- The server manifest (server/manifest.js): This is a document describing the options we apply to our hapi server, including the various hapi plugins we register on it. Technically it represents a Glue manifest, which will be used to compose our server based upon server, connection, and hapi plugin configurations. It utilizes hapi's Confidence package, essentially a dynamic, filterable configuration document format, in order to cleanly adjust the server's configuration based upon environment variables.
- The environment file (server/.env): This is a file for storing environment variables (recommended by the 12-factor methodology for storing configuration); we use the dotenv library to parse this file's contents into node's process.env, then utilize those variables in our server configuration as needed.
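To see how those two pieces meet, here's a rough sketch of the general shape of server/index.js (treat it as an outline, not an exact copy of your generated file): it parses server/.env with dotenv, resolves the Confidence manifest against process.env, and hands the result to Glue to compose and start the server.

'use strict';

// A sketch of server/index.js's general shape; consult your generated file for specifics
const Glue = require('@hapi/glue');
const Manifest = require('./manifest');

exports.deployment = async ({ start } = {}) => {

    // Resolve the Confidence document, using environment variables as criteria
    const manifest = Manifest.get('/', process.env);

    // Glue composes a hapi server from the manifest, registering its plugins
    const server = await Glue.compose(manifest, { relativeTo: __dirname });

    if (start) {
        await server.start();
        console.log(`Server started at ${server.info.uri}`);
    }

    return server;
};

if (!module.parent) {

    // Parse server/.env into process.env before resolving the manifest
    require('dotenv').config({ path: `${__dirname}/.env` });

    exports.deployment({ start: true });
}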
The basic process for configuring our server:
- Make a copy of .env-keep named .env.
  - Do not commit this file! Keep it local, as it's the place where you'd keep sensitive information, like API keys or other credentials. By default it's listed in the .gitignore file, so it won't be tracked.
- Specify dynamic (deployment-specific) configuration in that file.
  - Be sure to keep .env-keep up-to-date with placeholders for each environment variable your application uses. That way, the next person who clones your project will know which credentials need to be filled in.
- Reference and work with those variables in our server manifest.
A simple example:
We can add the following to our .env file:
PORT=4000
Now, let's take a look at our manifest. Near the top, we see:
//...
port: {
    $param: 'PORT',
    $coerce: 'number',
    $default: 3000
},
// ...
The $param Confidence directive uses the parameters passed to the manifest within server/index.js, which in this case is process.env (i.e. Manifest.get('/', process.env)). In other words, we're pulling in process.env.PORT to determine the value set to the current property, port, which, following the specification of a Glue manifest, represents the server.options.port hapi server option. When process.env.PORT isn't set, Confidence brings the $default of 3000 into play: that's why the first time we started the server we saw it running on port 3000 ("Server started at http://localhost:3000"). Finally, because environment variables are always technically strings, Confidence allows us to $coerce the value to a number so that it becomes valid hapi configuration for a port, as hapi wouldn't accept a string here.
To translate: because we configured PORT as 4000 in the server/.env file, our server is now configured to serve requests on port 4000 rather than the default of 3000.
For the rest of this tutorial, we'll switch back to the default port 3000 (deleting the .env file or commenting out the PORT setting therein), but you're welcome to keep your server configured as-is.
There's more to Confidence, but the gist is that the hapi pal configuration setup allows us to not just set configuration in the environment, but conditionalize our hapi server configuration based upon the environment with minimal overhead. As with everything else we gloss over here, we encourage you to read more if you're still curious.
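For a taste of what that looks like, here's a hypothetical fragment you could add to the manifest; it uses Confidence's $filter directive to vary hapi's debug option based on NODE_ENV (the property and values are illustrative, not something the boilerplate ships with):

// Hypothetical manifest fragment: vary hapi's debug logging by NODE_ENV
debug: {
    $filter: 'NODE_ENV',                     // matched against process.env.NODE_ENV
    production: {
        request: ['implementation']          // quieter in production
    },
    $default: {
        request: ['error', 'implementation'] // chattier everywhere else
    }
}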
The only riddle to our current 404 message is "Why would Paldo care?" And the answer is, to say the least, uninteresting.
As we know, Paldo wants to share riddles with friends. They don't need anything fancy to start, just a way to get off the ground! And riveting suspense in classic fantasy literature aside, refusing to answer riddles is plain cruel. Of course, Paldo wants to offer their friends reprieve if they really, really tried but can't crack these riddles, so Paldo will also need a way to give answers.
The simplest way to do all this? A couple of quick and easy routes.
hpal helps us out here, too. It can generate a route template we can simply fill in.
npx hpal make route riddle-random
You should see Wrote lib/routes/riddle-random.js printed back. That file now exists in our project. It should contain this basic route template:
'use strict';

module.exports = {
    method: '',
    path: '',
    options: {
        handler: async (request, h) => {}
    }
};
The file exports a hapi route configuration object (or may export an array of them). hapi pal's directory and file structure is governed by a tool called haute-couture, which you can see is used in your project at lib/index.js. When you place a file in the routes/ directory, as hpal did for us here, it will automatically be added to your application plugin because haute-couture will make the call to server.route() for you! The same can be said for other plugin functionality—you'll find that models go in models/, authentication strategies go in auth/strategies/, etc.
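For reference, the generated lib/index.js looks roughly like the following (a sketch; your exact file may differ slightly): the plugin's register function simply hands the server off to haute-couture, which walks lib/'s directories and makes the corresponding hapi calls (server.route(), server.register(), and so on) on your behalf.

'use strict';

// A sketch of lib/index.js; your generated file may differ slightly
const HauteCouture = require('@hapipal/haute-couture');
const Package = require('../package.json');

exports.plugin = {
    pkg: Package,
    register: async (server, options) => {

        // Wire up everything under lib/ (routes/, models/, plugins/, etc.)
        await HauteCouture.compose(server, options);
    }
};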
But for now we need to outfit lib/routes/riddle-random.js so it allows Paldo to broadcast a riddle, chosen at random from the complete archives, to any friends interested in a brain-teaser.
That might look like the following:
'use strict';

module.exports = {
    method: 'get',
    path: '/riddle-random',
    options: {
        // Our handler doesn't need to do anything asynchronous or use the
        // response toolkit, so the route handler's signature appears a little simpler than before
        handler: (request) => {

            // We define some riddles, hardcoded for now
            const riddles = [
                {
                    slug: 'no-body',
                    question: 'I have a head & no body, but I do have a tail. What am I?',
                    answer: 'A coin'
                }
                // etc.
            ];

            // And we reply randomly
            const randomIndex = Math.floor(Math.random() * riddles.length);
            const randomRiddle = riddles[randomIndex];

            return `${randomRiddle.slug} — ${randomRiddle.question}`;
        }
    }
};
Be sure to restart your server in order to pick up this new code.
If you cURL our new route (curl http://localhost:3000/riddle-random) or visit it in your browser, you'll see one of Paldo's riddles. We're up and running!
Now, let's set up a way for people to get answers if (well, when :)) they get stumped. We'll rely on Paldo's friends supplying the slug of the riddle they're stuck on (for now) so we know which answer to supply.
First, we set up the route:
npx hpal make route riddle-answer
Alternatively, you could convert our first route's export to an array of route objects, since hapi's server.route() accepts either a single route object or an array of them (see the sketch just below). In this tutorial we'll store one route per file, but we encourage you to experiment with what organization works for you. We find that (1) it's convenient to have the handler inline with the rest of the route config, and (2) it becomes cumbersome to maintain multiple handlers in the same file, which leads us to typically keep a single route config and handler per file.
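For illustration, that alternative might look something like this (a hypothetical lib/routes/riddles.js with the handlers trimmed down to placeholders):

// Hypothetical: several routes exported from a single file as an array
'use strict';

module.exports = [
    {
        method: 'get',
        path: '/riddle-random',
        options: {
            handler: (request) => 'a random riddle would go here'
        }
    },
    {
        method: 'get',
        path: '/riddle-answer/{slug}',
        options: {
            handler: (request) => `an answer for ${request.params.slug} would go here`
        }
    }
];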
Moving on!
Immediately, we see that our strategy of hardcoding our riddles within our first route's handler is, although expedient, unworkable. Our other routes will need to know about that data (let alone any other pieces of our riddle-sharing application we build later). So, let's centralize it.
For our purposes, we'll create a file called data.js under lib/ and set it up to export our riddles.
// lib/data.js
'use strict';

exports.getRiddle = (slug) => {

    const bySlug = (riddle) => riddle.slug === slug;

    return exports.riddles.find(bySlug);
};

exports.riddles = [
    {
        slug: 'no-body',
        question: 'I have a head & no body, but I do have a tail. What am I?',
        answer: 'A coin'
    }
    // etc.
];
Now, we require this file in any route that needs to know about our riddles. Our first route now becomes much simpler.
// lib/routes/riddle-random.js
'use strict';

const Data = require('../data');

module.exports = {
    method: 'get',
    path: '/riddle-random',
    options: {
        handler: (request) => {

            const randomIndex = Math.floor(Math.random() * Data.riddles.length);
            const randomRiddle = Data.riddles[randomIndex];

            return `${randomRiddle.slug} — ${randomRiddle.question}`;
        }
    }
};
And now, our new route:
// lib/routes/riddle-answer.js
'use strict';

// Boom builds Error objects for hapi that represent HTTP errors
const Boom = require('@hapi/boom');
const Data = require('../data');

module.exports = {
    method: 'get',
    path: '/riddle-answer/{slug}',
    options: {
        handler: (request) => {

            const { slug } = request.params;
            const riddle = Data.getRiddle(slug);

            // array.find() returns undefined when unsuccessful
            // In that case, we give the client an HTTP 404 error
            if (!riddle) {
                throw Boom.notFound('Sorry, that riddle doesn\'t exist (yet)');
            }

            return riddle.answer;
        }
    }
};
After restarting your server, we may cURL our new route with a slug (curl http://localhost:3000/riddle-answer/no-body) or visit it in your browser, and we'll see the answer associated with the slug.
We're going to dive into more complex, glamorous stuff in a sec. But, since we've written some chunks of code and are about to write a whole lot more, let's quickly cover how to lint our work.
Pal uses eslint and comes outfitted with a hapi-specific eslint configuration that you're welcome to extend or customize.
We can take advantage of this in a couple of ways.
Run npm run lint—this executes eslint with our config on all files not ignored by npm (see the project's standard .npmignore)—and then fix whatever warnings and errors it spits out (or, if you prefer eslint's automatic fixing: npm run lint -- --fix).
Batching lint errors in this way provides you a quick and clear punchlist of lines to clean-up before committing and pushing your code.
Many modern IDEs have eslint plugins that will detect any eslint configuration files in your project and call out violations of whatever rules you've set up as you code.
If you don't mind the near-constant noise, this can be a helpful way to stay on top of linting errors.
Take a peek at this list of editors with eslint integrations.
So, with that in hand, lint away, tidy up, then keep moving.
In each of our routes, we did some work to set and move around our hard-coded riddles data. For all but the simplest applications, this strategy for storing data isn't workable. We'll want to setup an actual database, store our riddles there, then give Paldo some tools for managing their riddle data themselves, instead of having to ask us to do the silly manual hard-coded updates.
Pal has assembled a few packets of additional tooling and functionality that we refer to as flavors: these are things that you can apply to the baseline pal boilerplate to help build out specific types of projects.
Flavors are really just tagged commits that you can git cherry-pick. They're intentionally small: a couple of short configuration files / file modifications and a few new dependencies, tops. Just like the basic pal setup, we give you just the scaffolding—tooling and directory structure—to get started, guiding you to writing your own code quickly.
hapi pal's tool of choice for database management and querying is Objection ORM, which we've integrated with hapi via the schwifty plugin.
Objection ORM is an impressive SQL query-builder with a fantastic community, built on top of knex. We find that it keeps us in control of our data access, and allows us to drop down to low-level database features as needed. Schwifty integrates Objection into hapi by ensuring the database is available when the server starts, closing database connections when the server stops, pluginizing knex migrations, and making models available where it is most convenient.
So, let's pull in our Objection flavor.
If you used the hpal CLI to start your project as described above, run:
git cherry-pick objection
If you cloned the pal repo (rather than using npm init @hapipal paldo-riddles), you'll need to fetch the tagged commits first:
git fetch pal --tags
git cherry-pick objection
Expect to resolve small merge conflicts when pulling flavors in, typically just in package.json and server/manifest.js, having to do with overlapping dependencies in HEAD and the flavor.
Most of what just got pulled in is relatively simple, but worth a quick review:
- The objection, schwifty, knex, and sqlite3 packages.
  - objection is the SQL-oriented ORM described above.
  - sqlite3 is a light-weight SQL database engine that we'll use to test our work here.
  - knex handles database connections and provides the core query-building functionality to Objection.
  - schwifty is the hapi plugin described above, allowing knex, Objection, and hapi to all play nice together.
- knexfile.js is a configuration file that the knex CLI will use to know how to connect to our database. We use the knex CLI to create new migrations and manually run migrations.
- lib/migrations/ and lib/models/ are where we keep our database migration files and models, respectively; we'll write some in just a minute! As with most things in pal, put those resources in the folders created for them and haute-couture takes care of the rest.
We left off the slightly more nuanced point: lib/plugins/@hapipal.schwifty.js vs. the @hapipal/schwifty plugin added to server/manifest.js.
The difference, to keep things simple here, is a matter of scope. Our application is implemented as a hapi plugin in lib/. That plugin depends on schwifty in order to define some models, so it registers schwifty by placing the file lib/plugins/@hapipal.schwifty.js. Our hapi server is located in server/, which is where all the nitty-gritty configuration concerning our deployment should live. In particular, our database configuration can be specified there by registering schwifty in server/manifest.js.
In this way, our plugin (lib/) can travel around to different servers if it needs to, and never worry about all the hairy deployment details, such as database credentials: schwifty will ensure our plugin finds the relevant database connection provided by knex, and bind it to our models. On the flip side, we can also keep plugin-specific configuration like migrationsDir—used by knex to determine which directory to check for the plugin's migration files—out of our deployment's configuration. Nice!
For a deeper look at this independence of plugin and deployment, take a peek at "The joys of server / plugin separation" (also linked earlier regarding server configuration).
Did we mention that you can deploy multiple plugins together, each with their own independent migrations?! If, for example, we add another plugin to our server that just so happens to use schwifty, we wouldn't have to care at all about the two colliding. Our plugin in lib/ would keep using its configured migrations directory, and the new plugin could also use its own, whether they share the same database or use separate databases.
One final point on server/manifest.js—let's quickly peruse how we've configured our base database connection:
// ...
$base: {
    // This is a schwifty option that sets our server to automatically run
    // our migrations on server start, bringing our database up to date
    migrateOnStart: true,
    knex: {
        client: 'sqlite3',
        useNullAsDefault: true,   // Suggested for sqlite3
        connection: {
            filename: ':memory:'  // You may specify a file here if you want persistence
        },
        migrations: {
            stub: Schwifty.migrationsStubPath
        }
    }
}
// ...
The main takeaway here is that, out of the box, we get an in-memory database. This is just fine for our purposes, as our data doesn't particularly matter (sorry, Paldo!), so it's okay for it to disappear every time our server shuts down. Just don't expect any of the data we set up in the rest of the tutorial to hang around. In our examples, we'll act as if our data is reliable and persistent.
If you'd rather not keep recreating riddles, you may set filename to a path to a file to which SQLite3 will write your data.
Create a file with extension .db (the extension doesn't matter to SQLite; .db is just a common convention). We'll call ours riddles.db. Then, update your manifest as follows:

//...
$base: {
    migrateOnStart: true,
    knex: {
        client: 'sqlite3',
        useNullAsDefault: true,
        connection: {
            // relative path to a sqlite database file (relative to the
            // directory in which you run the command to start the server)
            filename: 'riddles.db'
        },
        migrations: {
            stub: Schwifty.migrationsStubPath
        }
    }
}

In fact, there's already a sqlite database, prepopulated with a handful of riddles, available in the example application repo. As an exercise for the reader, try setting filename with an environment variable (as would usually be done in a production deployment, and as the examples repo is set up).
NOTE: using the prepopulated database may error because the migration scripts likely don't exist.
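If you want to try that exercise, a minimal sketch might look like the following, using an environment variable we'll call DB_FILENAME (a name we're making up here) set in server/.env; Confidence's $param pulls it in, and $default falls back to the in-memory database when it isn't set:

connection: {
    filename: {
        $param: 'DB_FILENAME',   // DB_FILENAME is our own made-up variable name
        $default: ':memory:'
    }
}

With that in place, DB_FILENAME=riddles.db in server/.env gives you local persistence, while leaving it unset keeps the throwaway in-memory behavior.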
Phew! That was a pile of words and theory! Sorry about that. Let's first check everything's still working:
# bring in our new dependencies
npm install
npm start
Good! Now we can get back to building.
This section is a bit abstract, as we won't be able to test anything just yet. We're doing the legwork to get our data-scaffolding in place so that we're set up to start doing interesting work with it.
Our job now is to:
- model the real-life objects (riddles) our client (Paldo) cares about in our system
- set up our database so we can use it to store and retrieve instances of these models
The pal CLI again helps us out here:
npx hpal make model Riddles
That should result in Wrote lib/models/Riddles.js.
Let's break that file down:
// lib/models/Riddles.js
'use strict';

const Schwifty = require('@hapipal/schwifty');
const Joi = require('joi');    // hapi's preferred package for data validation

// Schwifty models are based on Objection's, but outfitted to use Joi
// Make sure to update "ModelName" to your model's name—
// this is how you will reference it throughout your application.
module.exports = class ModelName extends Schwifty.Model {

    static tableName = '';

    // Here we'll define a joi schema to describe a valid Riddle.
    // Schwifty will then use this to ensure that the data we try to use
    // to create/update our riddles complies with our definition of a Riddle.
    static joiSchema = Joi.object({});
};
First things first: make sure to change your model class's name from ModelName to Riddles, which is how we'll reference the model throughout the application (e.g. in route handlers). Similarly, set tableName to whichever table you'd like to use to store riddles in your database, e.g. 'Riddles'.
Filling the rest out properly requires some understanding of Joi, hapi's preferred data validation library. Joi is extremely expressive, as you can probably tell from its extensive API documentation. hapi route payload, query, and path parameters are also typically validated using Joi, which is why we integrated it into Schwifty's Model class. After looking at some Joi examples, let's fill that in:
// lib/models/Riddles.js
'use strict';

const Schwifty = require('@hapipal/schwifty');
const Joi = require('joi');

module.exports = class Riddles extends Schwifty.Model {

    static tableName = 'Riddles';

    static joiSchema = Joi.object({
        id: Joi.number().integer(),
        slug: Joi.string(),
        question: Joi.string(),
        answer: Joi.string()
    });
};
With the above changes, we've just declared:
- We care about Riddles objects and will store them in a table of the same name.
- Riddles have a slug, question, and an answer, all of which must be strings.
- Riddles have a numeric id.
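To get a feel for what that joiSchema buys us, here's a small sketch you could run from the project root in a Node REPL or scratch file (it isn't part of the tutorial's code):

// Sketch: Schwifty wires joiSchema into Objection's validation, so building
// a model instance from bad data throws a validation error
const Riddles = require('./lib/models/Riddles');

// Valid input produces a Riddles model instance
const riddle = Riddles.fromJson({
    slug: 'no-body',
    question: 'I have a head & no body, but I do have a tail. What am I?',
    answer: 'A coin'
});

// Invalid input throws ("question" must be a string)
try {
    Riddles.fromJson({ slug: 'no-body', question: 42 });
}
catch (err) {
    console.log(err.message);
}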
Now let's get that model into our database. To do that, we use knex migrations. You can read more here, but, basically, the task is using knex's schema builder to describe the modifications to our database needed to store the model we just described. (Or modified! If you ever change a model, chances are good you'll need to make a corresponding change to your database via a migration.)
First, we create a migration file. We can auto-generate one with the knex command:
npx knex migrate:make add-riddles
Things to know:
- the knex CLI is installed with the main knex package.
- migrate:make is described in the knex docs here.
- add-riddles is the base name of the migration file; try to describe what this migration does to make reviewing migration history mildly easier.
If everything's going okay, you should see something like:
Created Migration: /your/local/path/paldo-riddles/lib/migrations/20180226173134_add-riddles.js
knex uses the timestamps of your migration files to reliably order migrating and rolling back.
That creates just the scaffold of a migration file. Here's our filled-in version:
'use strict';

exports.up = async (knex) => {

    await knex.schema.createTable('Riddles', (table) => {

        table.increments('id').primary();
        table.string('slug').notNullable();
        table.string('question').notNullable();
        table.string('answer').notNullable();
    });
};

exports.down = async (knex) => {

    await knex.schema.dropTable('Riddles');
};
Essentially, we've copied the work we already did in our model, but we should note a couple of migration-specific concepts here:
- up and down - these are the actions we can take with our migrations; up performs the migration, while down is used to roll back the migration. down should always be the inverse of up, bringing our database back to the state it was in prior to running the migration.
- notNullable() - this means that these fields are required. Note that Joi's default is the opposite: object properties are optional.
- increments('id').primary() - we define an auto-incrementing id as the primary key for each Riddle.
Moment of truth! Go ahead and run it!
npx knex migrate:latest
If all's gone well, you should see:
Batch 1 run: 1 migrations
At long last, we're ready to start working with our data.
Let's get rid of those hardcoded riddles. To recreate them, we'll give Paldo the tools to create riddles on their own.
We'll setup a route, write our first Objection query in our handler, then check our work.
Once again, do the hpal dance:
npx hpal make route riddle-create
Then fill in the route template as follows:
// lib/routes/riddle-create.js
'use strict';

const Joi = require('joi');

module.exports = {
    method: 'post',
    path: '/riddle',
    options: {
        validate: {
            // Check that the POST'd data complies with our model's schema
            payload: Joi.object({
                slug: Joi.string().required(),
                question: Joi.string().required(),
                answer: Joi.string().required()
            })
        },
        // Our db query is asynchronous, so we keep async around this time
        handler: async (request) => {

            // We nab our Riddles model, from which we execute queries on our Riddles table
            const { Riddles } = request.models();

            // We store our payload (the prospective new Riddle object)
            const riddle = request.payload;

            // We try to add the POST'd riddle using Objection's insertAndFetch method
            // (http://vincit.github.io/objection.js/#insertandfetch)
            // If that throws for any reason, hapi will reply with a 500 error for us,
            // which we could customize better in the future.
            return await Riddles.query().insertAndFetch(riddle);
        }
    }
};
A bunch of familiar route setup, but we've also got a few new things going on here. Let's step through them:
- options.validate — where you place input validation rules; hapi allows various properties here for the different types of input you might allow. In our case, with a POST, we're looking at payload validation, which, just like our model, uses Joi to validate its input. hapi expects some sort of Joi schema: a plain object with properties containing Joi validations as seen above, or a full Joi schema object, like in our model (if we use a plain object, hapi will compile that object into a Joi schema for us).
  - Note that we have to call .required() on each key in this version of our schema. All Joi rules are optional by default. If we didn't require these values, they'd pass into our query, which would then fail due to a constraint violation, specifically that all of our riddle's schema's values are not allowed to be null in the database (per the notNullable() calls we made in our migration file).
- const { Riddles } = request.models() — the request.models() method is a request decoration added by schwifty. It allows you to access the models registered by your plugin so that we can make queries against them. Just ensure that the name used here matches your model class's name: class Riddles extends Schwifty.Model {}.
- await Riddles.query().insertAndFetch(riddle) — all Objection models, and therefore schwifty models (which extend Objection models), come with the static query() method, which translates to a SQL query for the table associated with the calling model (see Objection's explanation).
  - This declares the Riddles table as the target of the query we're building.
  - Objection's API is Promise-based, so we can await here.
  - insertAndFetch(riddle) inserts the Riddle into the database, then fetches it, including its auto-incremented id column.
Now, if we start our server and hit our new route...
curl -H "Content-Type: application/json" -X POST -d '{"slug": "see-saw", "question": "We see it once in a year, twice in a week, but never in a day. What is it?", "answer": "The letter E"}' http://localhost:3000/riddle
...we hopefully see our new model, sent right back to us with the id property set on it by our database, per the primary key in our migrations file:
{"slug":"see-saw","question":"We see it once in a year, twice in a week, but never in a day. What is it?","answer":"The letter E","id":1}
Excellent! We now have a fully wired-up database capable of storing our Riddles.
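For context, here are a few other common queries you could write against the same model, shown as they'd appear inside an async route handler; this is just a sketch of where query() can take you, not code the tutorial needs:

// Sketch only; these aren't part of the tutorial's routes
const { Riddles } = request.models();

// Fetch every riddle, newest first
const all = await Riddles.query().orderBy('id', 'desc');

// Update a riddle's answer and get the updated row back
const updated = await Riddles.query().patchAndFetchById(1, { answer: 'A coin!' });

// Delete a riddle by its primary key
await Riddles.query().deleteById(1);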
We have a lot of love for cURL. Still, manually prodding our endpoints puts the onus of properly formatting our requests on us, an error-prone endeavor liable to drive you a bit nuts as you build, especially if you end up working with more complex models.
Thankfully, we can address this issue post-haste with another flavor.
git cherry-pick swagger
# As noted earlier, you might first have to resolve small merge conflicts when pulling in flavors, typically in `package.json` and `server/manifest.js`
npm install
This sets up a Swagger interface for our application, courtesy of a fantastic hapi plugin named hapi-swagger. Now, if we mark our routes appropriately, they will appear at /documentation, where we'll see a form for each route that lets us hit our routes and enter data directly without formatting requests by hand.
To mark our routes, add the following tags entry to each route config:
module.exports = {
    method: 'post',
    path: '/riddle',
    options: {
        // Swagger looks for the 'api' tag
        // (see https://hapi.dev/api/#-routeoptionstags)
        tags: ['api'],
        validate: { ... }
    }
    // etc.
};
Now, if we start up our server and go to http://localhost:3000/documentation, we can see all our routes and can test them from there, as an alternative to cURLing. This is totally a nice-to-have; it just simplifies testing against the live server a bit.
Having made Paldo's Riddles a bit more flexible and dynamic, let's clear out the hardcoded work we put in place earlier. We can delete the lib/data.js file altogether, since we'll be storing new riddles in the database by making calls to POST /riddle.
In fact, let's delete our riddle-answer route too, replacing it with a route for getting all the details about a specific riddle. This moves us to a simpler interface, one that allows interaction with entire resources rather than just pieces of them.
Feel free to git commit before removing these files, so that you can look back at all the work you've done later!
rm lib/data.js lib/routes/riddle-answer.js
npx hpal make route riddle-by-id
We end up with this:
// lib/routes/riddle-by-id.js
'use strict';

const Boom = require('@hapi/boom');
const Joi = require('joi');

module.exports = {
    method: 'get',
    path: '/riddle/{id}',
    options: {
        tags: ['api'],
        validate: {
            params: Joi.object({
                id: Joi.number().integer()
            })
        },
        handler: async (request) => {

            const { Riddles } = request.models();
            const { id } = request.params;

            const riddle = await Riddles.query().findById(id);

            if (!riddle) {
                throw Boom.notFound('Sorry, that riddle doesn\'t exist (yet)');
            }

            return riddle;
        }
    }
};
The only new thing is really that we're now validating path parameters instead of a payload, but the core ideas are essentially the same.
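As an aside, you don't have to reach for cURL or Swagger to check a route like this: the boilerplate ships with lab and code for testing, and the deployment() helper exported from server/ lets you stand up the server and exercise routes with server.inject(). A rough sketch, which may need tweaking for your boilerplate version:

// test/riddle-by-id.js; a sketch, so adjust to match your boilerplate's test conventions
'use strict';

const Code = require('@hapi/code');
const Lab = require('@hapi/lab');
const Server = require('../server');

const { describe, it } = exports.lab = Lab.script();
const { expect } = Code;

describe('GET /riddle/{id}', () => {

    it('404s when a riddle does not exist', async () => {

        const server = await Server.deployment();
        await server.initialize();    // runs migrations, per migrateOnStart

        const res = await server.inject('/riddle/999');

        expect(res.statusCode).to.equal(404);
    });
});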
Finally, we'll need to refactor our riddle-random route so it doesn't depend on our defunct lib/data.js. This ends up being a bit more complex than the original version, given that we no longer trivially know how many riddles comprise the range of our random selection.
// lib/routes/riddle-random.js
'use strict';

const Boom = require('@hapi/boom');

module.exports = {
    method: 'get',
    path: '/riddle-random',
    options: {
        tags: ['api'],
        handler: async (request) => {

            const { Riddles } = request.models();

            // Count all Riddles
            const count = await Riddles.query().resultSize();

            // The only case that we can't find a riddle is if there aren't any in the DB
            if (count === 0) {
                throw Boom.notFound('Looks like we don\'t have any riddles. Sorry!');
            }

            // Use the total riddle count to determine a random offset
            const randomOffset = Math.floor(Math.random() * count);

            // Grab the Riddle at that random offset
            const randomRiddle = await Riddles.query().offset(randomOffset).first();

            return randomRiddle;
        }
    }
};
Ok, let's boot up and test! Assuming we used POST /riddle to create some riddles, /riddle/1 will return the first riddle we created and /riddle-random will behave as it did previously.
Hey, this is a pretty good start for Paldo—good work! As you can see, there's a lot out there to explore in both the hapi-verse and pal-verse. We hope this is a good starting point to dive deeper into the features and documentation of the various tools that pal has incorporated together. Here we leave you with a list of resources, not to be overwhelming—we know you can be productive while mastering the toolset—but to be encouraging: the community has created some incredible tools for you to use!
- hapi - the hapi API docs are an amazing resource worth keeping nearby.
- the pal boilerplate - this is the baseline setup for pal projects, including a nice setup for deployment, testing, linting, and pluginization of your application. It also offers a handful of "flavors", which helped us more easily integrate Swagger documentation and a SQL-backed model layer.
- hpal - this is the command line tool we used to start a new project, create routes in routes/, and models in models/. It does much more too—you can also search documentation with it from the command line, for example: hpal docs:schwifty request.models.
- haute-couture - this is used by the pal boilerplate to enforce the directory structure for your hapi plugin (everything in lib/).
- joi - this is the validation library of choice for hapi projects, since it integrates nicely into hapi itself.
- schwifty - this is the hapi plugin that helps you easily use a SQL database in your project.
- Objection ORM - this is the ORM supported by schwifty. We love it because it's a powerful SQL query builder that enables us to express queries in a natural way, and has a wonderful community.
- knex - this provides database connections to Objection models, governs database migrations, and has a useful CLI utility.