My thoughts on Fabric after a week of use.
Fabric is a Python library for running shell commands locally or remotely.
Python programmers can write their shell scripts in Python itself, using Python's syntax to control the flow of logic while relying on proven shell commands to manage the system. Fabric helps you do this with its built-in functions:
- local: Run a command on the local system.
- run: Run a command on a remote system.
- lcd: Change the local working directory.
- cd: Change the remote working directory.
To back up the latest version of a local file use:
local('cp -u ~/fabfile.py ~/backup')
(the -u update option tells the system to copy only if the source file is newer).
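The update-only behaviour of cp -u can be sketched in plain Python; copy_if_newer below is a hypothetical helper for illustration, not part of Fabric:

```python
import os
import shutil

def copy_if_newer(src, dst):
    """Copy src to dst only when dst is missing or older than src (like cp -u)."""
    if not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst):
        shutil.copy2(src, dst)  # copy2 also preserves the source timestamps
        return True
    return False
```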
To run a command in a specific directory on all the remote servers just say:
with cd(code_dir):
    run("git pull")
fab
Fabric provides a command-line tool called fab.
When invoked, fab looks in the current directory and up through its ancestors for a file called fabfile.py. This file should contain the Fabric commands/tasks which you have written.
The command fab --list will show all available tasks.
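As a sketch, a minimal fabfile.py (assuming the Fabric 1.x API) might contain a single task; fab --list would then show it along with the first line of its docstring:

```python
# fabfile.py -- a minimal sketch (Fabric 1.x API assumed)
from fabric.api import task, local

@task
def hello():
    """Print a greeting on the local machine."""
    local('echo hello')
```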
The list of remote servers can be set (in env.hosts) and overridden (e.g. @hosts('localhost')) in the fabfile.
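For example, host configuration in a fabfile might look like the following sketch (the server names are placeholders):

```python
from fabric.api import env, hosts, run

# Default hosts for all tasks in this fabfile.
env.hosts = ['web1.example.com', 'web2.example.com']

@hosts('localhost')  # Override: this task only ever runs against localhost.
def status():
    run('uptime')
```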
Writing Reusable Code
To reuse code in several projects, common structures and processes must be adopted. In general, projects have the following lifecycle:
- Install the application code from its remote repository.
- Set up a local development environment.
- Deliver the changed application code to the remote repository.
- Deploy the changed application code to the live server(s).
Each step then has a corresponding task in the project's fabfile: install, setup, deliver and deploy.
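Under these conventions a project fabfile can be skeletonised as follows; this is a sketch only, with the task bodies left empty and the Fabric 1.x @task decorator assumed:

```python
from fabric.api import task

@task
def install():
    """Install the application code from its remote repository."""

@task
def setup():
    """Set up a local development environment."""

@task
def deliver():
    """Deliver the changed application code to the remote repository."""

@task
def deploy():
    """Deploy the changed application code to the live server(s)."""
```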
Much of the logic used to set up a development environment can be reused when deploying onto remote servers. To do so, equivalent local and remote functions need to be made available in the fabfile under the same names. For instance:
import os

from fabric.api import cd, lcd, local, run
from fabric.contrib.files import exists

def get_functions(remote):
    """
    Return functions which set the current working directory,
    execute commands and check for the existence of files
    either remotely or locally.
    """
    if remote:
        return (cd, run, exists)
    else:
        return (lcd, local, os.path.exists)
Common sections of code then begin with a call to get these functions depending on whether local or remote processing is being done.
The projects also need to use a common directory structure and all of the processing logic needs to operate relative to the root (topmost) directory.
With these prerequisites in place the following common logic can be used:
def create_directories(self, dirs, remote=False):
    """Create any missing directories, locally or remotely."""
    cwd, ex, exists = get_functions(remote)
    root_path = self.live_root_path if remote else self.dev_root_path
    print('# Creating any missing directories')
    for directory in dirs:
        path = os.path.join(root_path, directory)
        if not exists(path):
            print(yellow('# Creating directory %s' % path))  # yellow from fabric.colors
            ex('mkdir %s' % path)
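The local branch of this pattern can be exercised without any remote host at all. Below is a plain-Python sketch of the same loop, with create_missing_dirs as a hypothetical stand-in that defaults to the local functions:

```python
import os

def create_missing_dirs(root_path, dirs, exists=os.path.exists, make=os.mkdir):
    """Create any directories under root_path that do not yet exist."""
    created = []
    for directory in dirs:
        path = os.path.join(root_path, directory)
        if not exists(path):
            make(path)
            created.append(path)
    return created
```

Passing the exists/make pair as arguments mirrors the get_functions dispatch: the same loop body serves both the local and remote cases.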
Django Commands
By having a fabfile in the topmost directory of the project and
writing a task to run Django commands, you can then execute them from any
directory in the hierarchy.
Using the following task, the command fab manage:test will run the Django test management command.
@task
@hosts('localhost')
def manage(*args):
    """Locally execute Django command."""
    with settings(warn_only=True):
        with lcd(DJANGO_PROJECT_PATH):
            local('python manage.py %s' % (' '.join(args)))