Consistency is important. Make sure that what you think you're building is actually what you're building. If you're developing and testing in a Windows environment but intend to deploy to a Linux environment, you're going to get unexpected results. If you're using Python 2.7 on your development machine and Python 3 in your production environment, you're not going to have a good day. If your development and production environments use different package and library versions, you're likely in for trouble.
Development and Target Platforms
Your development platform is the environment you're writing and debugging your code in. This might be a Windows machine or it might be a Raspberry Pi that you SSH into.
The target platform is the environment your code is intended to run on when it's finished, often referred to as "in production." This might be an Amazon EC2 instance, a Linux server, or a Raspberry Pi.
Regardless of your development and target platforms, you want to make sure what you're building is going to run the same way during development as it does in production. There are a few factors that need to be considered when it comes to runtime consistency.
The period during which code executes is often called its runtime. Confusingly, for interpreted languages, the interpreter and/or collection of libraries that execute the code is also called a runtime.
For Python, each version of Python (e.g. 2.7, 3) constitutes a runtime.
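A runtime mismatch can be caught early with a guard at the top of a script; a minimal sketch:

```python
import sys

# Fail fast if the interpreter is not the runtime this code targets.
if sys.version_info < (3,):
    raise RuntimeError("This code targets the Python 3 runtime")

print("Running on Python %d.%d" % sys.version_info[:2])
```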
Facets of a Platform
- Operating System
  - Windows, Linux, macOS, etc.
- Runtime
  - Python 2.7, Python 3, .NET 4, etc.
- Libraries
  - pySerial, Tkinter, etc.
Operating systems are both very similar and very different. For the most part, they provide a common set of features such as IO, scheduling, and memory and resource management. How they provide access to these mechanisms, however, varies widely across OS flavors. These differences often force cross-platform languages into strikingly different implementations to accomplish the same common operation.
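A short sketch of how Python exposes some of these platform differences at runtime:

```python
import os
import platform

# platform.system() reports which OS the interpreter is running on.
print(platform.system())   # "Windows", "Linux", or "Darwin" (macOS)

# Even small details differ between platforms, like the path separator.
print(os.sep)              # "\\" on Windows, "/" elsewhere
```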
For higher-level operations, such as showing a graphical user interface, library developers unaffiliated with the language itself typically target one specific platform, producing a library that works on that operating system but not the others. Other developers then create similar libraries targeting the remaining platforms, leaving multiple libraries that achieve the same objective with widely different implementations.
This is why community-driven languages often end up with duplicative libraries.
Runtimes, as mentioned above, are the collection of libraries, packages, and interpreters that provide for the execution of code. Different runtimes have dramatically different characteristics and available features. The .NET runtime is markedly different from the Java and Python runtimes.
Generally, runtimes of a given language tend to be similar to one another and often backwards compatible: .NET 2 builds on top of .NET 1, .NET 3.5 adds to .NET 2, and so on. However, this is an assumption and not always true. Python 3 diverges from Python 2.7 in a number of ways, meaning code that executes in the Python 2.7 runtime will very likely not work in Python 3.
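For example, one well-known divergence between the two Python runtimes is the division operator:

```python
# Under Python 2.7, 3 / 2 evaluates to 1 (integer division).
# Under Python 3, the same expression evaluates to 1.5.
print(3 / 2)    # 1.5 on the Python 3 runtime
print(3 // 2)   # floor division: 1 on both runtimes
```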
Runtimes depend strongly on the operating system they execute on, as the differing system call implementations described above illustrate. Windows 10 might provide one system call to read input from the keyboard while macOS provides something completely different. For a language to be cross-platform, it must have multiple runtimes targeting different operating systems.
Generally, runtimes abstract away differences between one operating system and another. For example, in C++ you use cout to write to the screen regardless of the operating system. Some languages are better at this abstraction than others.
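Python's standard library does the same kind of abstraction; for example, pathlib and os.path build file paths with the correct separator for whichever OS the code runs on:

```python
import os
from pathlib import Path

# Build a path without hard-coding "/" or "\"; pathlib emits the
# correct separator for the OS the code is running on.
project = Path("workspace") / "raspberry" / "project1"
print(project)
print(os.path.join("workspace", "raspberry", "project1"))  # same idea
```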
Libraries sit at the bottom of the totem pole. They must abide by the whims and fancies of runtimes and operating systems. Just as runtimes depend on their host operating system, so too do libraries depend on runtimes. As runtimes diverge in functionality, libraries must either abstract away the differences to present a consistent API to developers or only work under specific conditions.
All of these factors make developing, debugging, and deploying consistent code a challenge. Luckily, you're not alone; others have travelled down these roads before.
When and wherever possible, limiting the number of varying factors will greatly reduce the overhead required in maintaining consistency. Developing and deploying to the same environment, focusing on one operating system, and identifying a single runtime are all ways of reducing scope. Unfortunately, some of these are harder to accomplish than others.
Development and Target Platforms
You might develop on Linux-based systems and plan on deploying to similar systems. Perhaps you're building .NET Core applications on Windows and relying on Microsoft to do the heavy lifting for cross-platform compatibility. Or you're developing on an old Windows laptop you found at a garage sale and planning to deploy Python code to a Raspberry Pi.
The first two cases are typical and a nice place to be. You don't have much to worry about. The third is atypical and could pose a challenge:
One approach would be to develop and test on the laptop and deploy only when everything is finished. For the reasons stated above, this is not an answer: Python does vary between operating systems, especially between Linux and non-Linux systems. An obvious alternative is to write code on the laptop and test it on the Raspberry Pi. This is an answer, but depending on your setup it might not be the best one. If you're writing code, transferring it to the Pi, and then testing, that's a bad answer. If you've mounted the Raspberry Pi file system in Windows and are writing and testing in place, that's a lot better.
But what about runtimes and libraries? How do you make sure the runtime and libraries you're using on the Raspberry Pi are the same ones that get installed on other Raspberry Pis?
Going the Distance
Imagine a world where you develop using the tools you're comfortable with and test on the environment you'll deploy to. Not only that, you guarantee the runtime and library that you test with is the exact same as your production environment. Shangri La.
Let's first consider virtual machines. Virtual machines allow us to create and reproduce an identical system every time. They're most useful when developing on a different platform than your target. In our running example of a Raspberry Pi project, they're a little less useful if we have a Raspberry Pi on hand and substantially more useful if we don't.
The Raspberry Pi's common OS, Raspbian, is built against the ARM architecture. Most desktops and laptops use the x86/x64 architecture, which is not compatible with ARM. That means if you want to host a Raspbian virtual machine, you need to emulate ARM on your x86/x64 machine - not something you want to do.
Fortunately, Raspbian is based on Debian, which does run on x86/x64 systems. Because the OS is fundamentally the same, most runtimes, packages, and libraries that run on one run on the other, making Debian a prime virtual machine stand-in for the Raspberry Pi.
This example focuses on developing on Windows while targeting a Raspberry Pi as our production environment. Here's what we're going to do:
- Set up a Virtual Machine
- Mount our VM file system
- SSH to the VM
- Create our Runtime "Environment"
- Install some packages
Setting up a Virtual Machine
Create the virtual machine
- Install Virtual Box
- Download the Debian ISO
- Open VirtualBox
- Enter a name
- Select "Linux" for "Type"
- Select "Debian (x-bit)" for the "Version"
- Follow the wizard
- When it asks for the Guest CD, browse to the downloaded Debian ISO
- The Virtual Machine will boot
- Follow the installation guide
- When asked for packages to install, be sure to select SSH
Configure the Network
Virtual Machines default to using the NAT network type for the emulated network connection. This is often fine but can be tedious for development. We'll switch the network connection over to a bridged network.
- Under the VirtualBox menu, click "Devices" > "Network" > "Network Settings..."
- Changed "Attached To:" to "Bridged Adapter"
- Press "Ok"
- Cycle the connection
- Under the VirtualBox menu, click "Devices" > "Network" > "Connect Network Adapter" so it is unchecked
- "Devices" > "Network" > "Connect Network Adapter" so it is checked
Finally, let's add our user to the sudo group. This will allow us to use sudo to perform operations as a superuser. As root (switch with su), run:
$ adduser <username> sudo
sudo lets you perform operations as a superuser without needing to switch to the superuser with su. Log out and back in for the group change to take effect.
Mount our VM file system
This is the small effort big payoff step. We're going to create a link between our project directory in Debian and our Windows machine. This is going to let us use our favorite editor on Windows (VSCode) to edit files on the Virtual Machine.
Here's what we do in Windows:
- Create a directory for your projects (e.g. c:\Users\ahanson\workspace)
- Add the directory to the shared folders in VirtualBox
- Be sure and check "Auto-mount"
Install the Guest Additions on Debian:
Under the Virtual Box "Devices" menu:
- Click on "Insert Guest Additions CD Image"
- When an auto-run popup appears, click cancel
Open a terminal, switch to the root user, and do the following:
$ su
$ apt-get update
$ apt-get install build-essential module-assistant
$ m-a prepare
$ sh /media/cdrom0/VBoxLinuxAdditions.run
$ adduser <username> vboxsf
$ reboot
su switches to the administrator user in the Debian machine.
apt-get update refreshes the list of available system packages. This is necessary before installing the VirtualBox guest additions' prerequisites.
apt-get install build-essential module-assistant installs a couple of required packages for the VirtualBox guest additions.
My shared folder was called "workspace" - this resulted in a directory, "sf_workspace", being added to /media on the Debian virtual machine. This directory is the actual workspace directory contained on the Windows machine. Now any edits made to the files on Windows are immediately available on the Debian machine (because they're the same files).
SSH to our VM
For editing files we'll just rely on our Windows desktop and IDE/text editors. When we're ready to execute code, need to install libraries, or perform any other operations via a terminal, we can just SSH in.
First, we need the IP address of the Debian machine:
In the Virtual Machine open up a terminal and type:
$ sudo ifconfig
If ifconfig isn't available (newer Debian releases dropped it from the default install), use ip addr instead.
Now we can SSH from our Windows machine:
$ ssh <username>@<ipaddress>
You'll need to install OpenSSH on Windows to use ssh on the command line. Alternatively, you can download PuTTY and use it as your SSH client. The former is the suggested approach as it puts ssh in your PATH so you can use the Command Prompt or PowerShell.
Create our Runtime "Environment"
One of the biggest headaches when deploying code is making sure all the libraries and packages your project depends on are also installed. Missing a library or getting a version wrong can result in hours or days of headaches. Virtualenv fixes that by creating a virtual environment within a directory. This environment contains its own runtime, libraries, and PATH to make sure everything within the sandbox stays exactly the same.
First we need to install virtualenv:
$ pip install virtualenv
pip is a package manager for Python. It lets you install packages and libraries, keep track of what you have installed, update them, and reinstall them as needed (e.g. during deployment to a new environment).
Now let's create our workspace where we'll put our projects:
$ cd ~
$ mkdir -p workspace/raspberry/project1
$ cd workspace/raspberry/project1
mkdir -p creates a nested hierarchy of directories even if parent directories don't exist.
Finally, let's create the virtual environment and activate it:
$ mkdir ~/.venvs
$ virtualenv ~/.venvs/project1
$ source ~/.venvs/project1/bin/activate
virtualenv ~/.venvs/project1 creates the "virtual environment" for the project.
Note: We're separating out the sandboxed virtual environment containing the runtimes and libraries from our actual source code directory. This simply means only our code exists in our project directory. Clean!
source ~/.venvs/project1/bin/activate executes a shell script that sets up the environment PATH variables so Python and its libraries point to the sandboxed environment.
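You can verify from inside Python that the sandboxed interpreter is the one actually running; a small sketch (the attribute lookup covers both the virtualenv tool and Python's built-in venv):

```python
import sys

# Inside an active virtual environment, sys.prefix points into the
# sandbox while the base prefix still points at the system Python.
def in_virtualenv():
    base = getattr(sys, "base_prefix", getattr(sys, "real_prefix", sys.prefix))
    return sys.prefix != base

print(in_virtualenv())
```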
Install some packages
$ pip install pyserial
pip install pyserial installs the PySerial serial port access library.
Any libraries installed via pip can be imported in your code as normal.
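For instance, a main.py for this project might begin like the sketch below; the fallback message covers the case where the virtual environment isn't active or the package hasn't been installed yet:

```python
# main.py - a minimal sketch; the pyserial package installs a module
# named "serial".
try:
    import serial
    print("pyserial available, version", serial.VERSION)
except ImportError:
    print("pyserial missing - activate the venv and run `pip install pyserial`")
```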
SSH into your Debian machine and run the code:
$ cd /media/sf_<workspace>/
$ source ~/.venvs/project1/bin/activate
$ python main.py
Here's where it all comes together. Let's recap what we have:
- Virtual Machine approximating a Raspberry Pi environment (VirtualBox)
- Specific Python runtime environment (VirtualEnv)
- Tracked list of packages (pip)
- Source Code
Setting up the Deploy Target
Our virtualenv setup guarantees Python runtime consistency when we deploy our code somewhere else. All we have to do is exactly what we did above: create the environment on the target machine and everything is ready to go.
$ pip install virtualenv
$ mkdir ~/.venvs
$ virtualenv ~/.venvs/project1
$ mkdir -p ~/workspace/project1
$ source ~/.venvs/project1/bin/activate
Pip allows us to quickly and consistently install the same libraries. From the virtual machine (via SSH) perform the following:
$ pip freeze > requirements.txt
pip freeze > requirements.txt writes the list of all installed packages in the sandboxed environment to requirements.txt
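The resulting requirements.txt is just a plain text list of pinned package versions; for this project it might look something like this (the version number is illustrative):

```
pyserial==3.5
```

Keep requirements.txt alongside your source code so it gets copied to the deployment target with everything else.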
Copy the Code
Copy over the source code to the deployment target. For example, if you wanted to do this via the command line:
$ scp -r ~/workspace/project1 <user>@<deploy ip>:~/
scp, or secure copy, uses SSH to securely transfer files from one machine to another.
Now we can leverage the requirements.txt file we created to install the dependencies quickly with pip.
$ pip install -r requirements.txt
pip install -r requirements.txt installs a list of dependencies specified in the supplied file.