Is this really part 3?
Well… no. Not exactly. The title should have been about Netbox, but as you might have guessed, I am not using Netbox anymore. We migrated from Netbox 2.10.4 to Nautobot in the summer of 2021, and have been happily using it and developing on it ever since.
After all this time, providing out-of-date pynetbox code does not seem to make much sense. I will soon do my best to provide some of the examples we used back then (I still have parts of the code) and show their Nautobot equivalents. This post, however, is about Nautobot. And believe me, we have a lot to talk about without even scratching the surface.
(If you are looking for part 1 and part 2, here they are: #1 & #2)
What took you so long?
I don’t know, I am a busy guy? Kidding a little, but it’s sort of true. You get excited about the features of a new platform, plus you have needs/requests/use cases that can’t wait, so you dive in. You cover most of them, along with some you never intended to deal with, but then reality takes over and you have to rush off to other things, so carving out the extra time for blogging becomes very difficult. Or maybe at first you feel you don’t have enough to post about. After a while you have so much that you fall into the first case: not enough time (or not enough people).
But I guess that goes for most of you as well, so there is nothing more to say than this: if you have been waiting until now for part 3 on Netbox, I apologize.
If however you just stumbled upon this while searching for a Network SSoT with a great deal of automation capabilities, stay a while and listen (popular online action RPG reference).
Warning – TLDR section
There is no code in this post. Well, almost no code (a couple of small shell scripts don’t really count). Why? The post is already too big. It would take me another 4-5 days to test and prepare the old code, and almost half a day to include a couple of jobs I have written as examples.
This post explains what the setup of Nautobot on Docker looks like and why, how to deploy it using docker-compose, and how to perform basic maintenance. I will follow up soon with a post containing the rest of the info, but if you just want to get Nautobot up and running yourself, this post should contain all the info and references you need. If you feel something is missing and need more information, you can hit the NTC Nautobot slack or contact me directly on slack or on twitter. Up to you.
So .. again.. what happened?
Well.. we migrated. Migrating between such tools is not easy; you try to find the moment when as many conditions as possible are in your favor at once.
In reality, unlike the plot of a favorite movie where the heroes can go back to the past and try again, holding on to a certain moment or set of conditions can only go on for so long; after a while, the timelines begin to pull in opposite directions. So in this case, having stopped upgrading Netbox after 2.10.4 while investigating how to migrate to Nautobot 1.x, I held out as long as I could. After a certain point, I had no choice but to take the plunge.
How was it?
Cold. Uncomfortable. The first thing that hit me was that using Docker with Nautobot was not as straightforward as it had been with Netbox, at least at that point in time. Let me explain..
Difference in Support Scheme
That was one of the good differences. With Netbox, if you wanted to set things up on Docker, you had to look in (at least) two different places, as the team developing the Docker image was different from the team developing Netbox. So architectural decisions were made that perhaps made sense for the team building the image, but not for the community running Netbox on Docker in production (personal take). For example, while the documentation for Netbox on Linux (non-Docker) did include support for Nginx as a front webserver, the Docker team dropped that support after a certain point. Instead they suggested using Nginx Unit and offered no help with setting up https. That part – which you may recall from part 2 – really put me off. I expressed myself eloquently in the slack, made some enemies, but in the end some friends too (some even liked the post in part 2).
With Nautobot, the situation is different. It’s a single company behind everything: a single place to look, a single point of responsibility. This does make things simpler, as development and decision making happen in parallel for all versions, in the same direction. So, better for the end user. Again, that’s what I think.
Difference in product maturity
It’s common for Docker to be described as ‘not for production‘ (even by Docker trainers). What people usually mean by this is that Docker containers don’t scale with demand, in contrast to infrastructure that does, such as Kubernetes (whatever flavor) or similar. While that may be true for large-scale IaaS/SaaS offerings, in an enterprise where a few Network Engineers try to run an open source Network SSoT/Automation platform, hands are a scarce resource. If you don’t have enough people to learn about Automation and Programmability, you certainly have even fewer willing to invest time and effort in Kubernetes for such a solution.
Where are you going with this, and what does it all have to do with Nautobot maturity on Docker infrastructure?
Just stay with me for a sec.
NTC is the real power behind Nautobot development. I am sure there are other contributors, but NTC is driving it. This has a positive side, but it also raises some concerns. I will come back to this later in the post, but basically, NTC had not invested as much time in polishing Nautobot on Docker as they had in Nautobot on Linux (e.g. installed as a python package on a linux server). As far as use cases were concerned, they mostly saw Docker as a lab scenario, not a production one. So some things, although fully supported, were not presented as production ready: for example, setting up https, LDAP and more, all deployed together with docker-compose. It took some time for things to reach the point they are at now. But like I said, I will come back to this. Back to the differences between Netbox and Nautobot when deploying on Docker.
Difference in Architecture
The architecture suggested for Nautobot on Docker is a little different from Netbox’s Docker architecture as it was back in 2.10.4 (I have no idea what they are doing now, or even whether the team that made the image still exists).
LDAP Support
To start with, there is no separate Nautobot image with embedded LDAP support (as there was with Netbox on Docker). If you want LDAP support, you need to create your own custom image, either on top of Nautobot’s base image or from scratch, replicating what they do to build it (there are examples to follow in Nautobot’s github repo).
Https support
As with Netbox on Docker (as it was then), if you want https support, enabling it in the uWSGI web server is a bad idea: you should front it with another web server or reverse proxy, like Nginx. You can use other options, but why would you? Nginx is great and you can easily find what you need; F5 made sure not to strangle the project when they bought it. While Nginx is included in the documentation for deploying Nautobot on Linux, that wasn’t the case for Nautobot on Docker with docker-compose, not at the start anyway.
Additional python packages/plugin support
Again, while the deployment of plugins for Nautobot on Linux was covered in the documentation, this wasn’t the case for Nautobot on Docker with docker-compose. That’s why it took me a while to start using Nautobot plugins on Docker.
Since Nautobot is positioned not only as an SSoT solution but also as an automation platform, using additional python packages and plugins (apps) is a must if you want to exploit its true potential: you will need them sooner or later. So if it’s not clear how to deploy everything, if there are dark spots, that can become a problem.
Additional packages and plugins can be included in a custom image built on top of the Nautobot base image, or installed in one you build from scratch. It doesn’t sound very modular, I know, but rebuilding the image to include them is a small price to pay.
Why do you keep referring to docker-compose? Isn’t docker enough?
Docker-compose is a way to define every parameter you want as code, describing how a group of container-based services should be set up. Doing the same without docker-compose is possible but a lot more complicated (long docker run CLI commands that you would soon have to group into bash scripts). If you are just learning Docker, the docker CLI commands are great for getting to know the infrastructure and doing things the quick and dirty way, and they are also very good for verifying and troubleshooting. But if you really want to do things IaC-style, I suggest you invest some time in learning docker-compose. I use it all over the place and will demonstrate things with it in this and possibly following posts. Here is a link about it on the Docker website and another on how to install it on the Digital Ocean website.
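To make the contrast concrete, here is a hedged sketch (flags and names are illustrative, loosely mirroring the redis service from the compose file shown later), followed by the one-liner that replaces all of it:

# The imperative way: one long docker run per service, soon wrapped in bash scripts
docker run -d --name nautobot-redis \
  --restart unless-stopped \
  --env-file ./local.env \
  redis:alpine sh -c 'redis-server --appendonly yes --requirepass "$NAUTOBOT_REDIS_PASSWORD"'

# The declarative way: one file describes all the services, one command starts them
docker-compose up -d

With that distinction made, back to Nautobot’s container architecture.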
Nautobot’s container line-up for docker-compose
There is an all-in-one image for lab use; that’s not what I use in production. Nautobot requires a base container for Nautobot itself, additional containers as workers (one or more), a scheduler, a caching server, and whatever you choose for the DB and web server. These containers are defined as services in docker-compose and are equivalent to the services you would set up if you installed Nautobot on Linux (as per the Nautobot documentation). In my case, this is what I started with:
- 1 Nautobot container for the base Nautobot app.
- 1 Nautobot container as worker (RQ – Redis Queue)
- 1 Redis container as cache
- 1 Postgres container for the DB
- 1 Nginx container as front end Web Server.
The RQ worker was deprecated in Nautobot 1.1.0, when the Celery worker was introduced, along with Celery Beat as a scheduler. After going through an intermediate stage using both types of workers, and recently consulting with Josh Vanderaa from NTC, this is what I ended up with:
- 1 Nautobot container for the base Nautobot app.
- 1 Nautobot container as worker (Celery)
- 1 Nautobot container with Celery Beat as scheduler
- 1 Redis container as cache
- 1 Postgres container for the DB
- 1 Nginx container as front end Web Server.
It IS possible to do things differently. I will only deal with what works for me; this isn’t a Nautobot encyclopedia.
So how did you proceed?
I had to decide whether to try things on Docker or to start with a VM-based install and see how it went from there. To try things out with the migration, I chose the second option.
In both cases the Database is the same, whether it runs in a container or as a service on a VM. So I followed the procedure described by the NTC folks for exporting data from Netbox, and then used a special Nautobot plugin to do the migration. It wasn’t a straight line..
What went wrong?
With such an undertaking, development assumptions have to be made. In this case the Netbox DB held objects without unique names (for example, unnamed devices in inventory status), so the importer code ran into problems. NTC engineers in the slack offered their support, and after a while (a couple of weeks) all data was migrated. The plugin involved is still on GitHub in this repo, but I doubt it will be useful to anyone now that Netbox and Nautobot versions have moved on.. unless you have some of those Pym particles and Tony Stark’s time machine..
Nautobot no longer leaves devices unnamed. If a device is onboarded, a name/slug is generated; otherwise you are prompted (forced, really) to provide a name when adding the device yourself, either through the GUI or programmatically.
So you started with a virtual machine-based installation?
Yes, to start with. If you plan to do the same, this installation guide is very thorough. If you have a linux server prepared according to the prerequisites in the guide, it shouldn’t take more than half an hour to get a working Nautobot system, along with any required additional python packages and Nautobot plugins. If you have no experience installing Nautobot and need to spend more time checking everything, it can take a little longer.
A VM-based install can provide points of return in case mistakes or omissions are made along the way, through the use of snapshots. Explaining how virtualization works is beyond the scope of this article, but the idea is that a ‘snapshot’ corresponds to a file-based image of the current state of the Virtual Machine (optionally including its memory state, also in file form). The files corresponding to that state are ‘frozen’ (locked) and only a delta is kept from then on. This makes it very easy to ‘restore’ the stored state, as the Hypervisor only needs to discard the delta. Be careful about taking a snapshot while a DB is actively being updated, though: you risk DB corruption when restoring from that snapshot.
However, there is more to consider here. First of all, the install process still takes time and requires manual intervention. It’s also possible for a lot of things to go wrong during that process. In my case, for example, a piece of enterprise security software decided that a certain python package repository mirror was not as reputable as it should be, and blocked it. Such a situation can lead to serious problems, such as package database corruption. The same could happen with an apt package mirror (which is a lot more dangerous).
One can of course automate the installation with a tool like Ansible (there are alternatives). But that will not rule out problems like the one described above.
So you prefer Docker then?
Yes. Docker solves both of these problems and is a lot faster as well. Using docker-compose to organize and standardize the environment offers all the advantages of Infrastructure as Code. I studied what I had been doing with Netbox and then checked the docker-compose implementation included in the nautobot github repository, in the development folder.
But it wasn’t enough, so I started asking about it in the Nautobot channel on NTC slack. Josh Vanderaa was kind enough to contact me and say he was actively developing a proposed docker-compose setup that would be published in an NTC/Nautobot-related repo, but it was not quite ready yet. So we talked, and with his contribution I created my first setup for Nautobot on Docker using docker-compose, in the early form I described before, and provided feedback. Nothing similar was documented anywhere up to that point. Josh published that repo later on; you can find it here.
After playing with it, I realized that LDAP support, https support and additional python packages required some extra work. First of all, it was necessary to create my own image. Josh provides an example for LDAP over here and for additional plugins over here. I did run into problems with jobs and plugins, but I will cover that later. My own setup is a mix and match of his examples and my own experience.
The setup for LDAP and HTTPS
It’s time to detail my own setup, covering what I needed. First of all I set the goal of supporting both LDAP and HTTPS. To support LDAP, one must install specific apt packages (the Nautobot base docker hub image is based on Ubuntu/Python). For HTTPS it’s also necessary to create the CSR and certificate files and copy the certificate file into the image (or mount it in the docker-compose). First I will show the necessary files, and then I will explain why certain things are the way they are.
docker-compose.yml
---
version: "3.8"
services:
  nautobot:
    image: "noc/nautobot-ldap:latest"
    build:
      args:
        PYTHON_VER: "${PYTHON_VER:-3.10}"
      context: "./"
      dockerfile: "Dockerfile-LDAP"
      target: "final"
    env_file:
      - "./local.env"
    restart: "unless-stopped"
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - "./nautobot_config.py:/opt/nautobot/nautobot_config.py"
    depends_on:
      - "postgres"
      - "redis"
  celery_worker:
    build:
      args:
        PYTHON_VER: "${PYTHON_VER:-3.10}"
      context: "./"
      dockerfile: "Dockerfile-LDAP"
      target: "final"
    image: "noc/nautobot-ldap:latest"
    #entrypoint: "nautobot-server celery worker -l $$NAUTOBOT_LOG_LEVEL"
    entrypoint: "sh -c 'nautobot-server celery worker -l $$NAUTOBOT_LOG_LEVEL'"
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - "./nautobot_config.py:/opt/nautobot/nautobot_config.py"
    healthcheck:
      interval: "30s"
      timeout: "10s"
      start_period: "30s"
      retries: 3
      test: ["CMD", "bash", "-c", "nautobot-server celery inspect ping --destination celery@$$HOSTNAME"] ## $$ because of docker-compose
    depends_on:
      - "nautobot"
      - "redis"
    env_file:
      - "./local.env"
    tty: true
  celery_beat:
    build:
      args:
        PYTHON_VER: "${PYTHON_VER:-3.10}"
      context: "./"
      dockerfile: "Dockerfile-LDAP"
      target: "final"
    entrypoint: "sh -c 'nautobot-server celery beat -l $$NAUTOBOT_LOG_LEVEL'"
    image: "noc/nautobot-ldap:latest"
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - "./nautobot_config.py:/opt/nautobot/nautobot_config.py"
    healthcheck:
      interval: "5s"
      timeout: "5s"
      start_period: "5s"
      retries: 3
      test: ["CMD", "nautobot-server", "health_check"]
    depends_on:
      - "nautobot"
      - "redis"
    env_file:
      - "./local.env"
    tty: true
  redis:
    image: "redis:alpine"
    env_file:
      - "./local.env"
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
    command:
      - "sh"
      - "-c" # this is to evaluate $NAUTOBOT_REDIS_PASSWORD from the env
      - "redis-server --appendonly yes --requirepass $$NAUTOBOT_REDIS_PASSWORD" ## $$ because of docker-compose
    restart: "unless-stopped"
  postgres:
    image: "postgres:14"
    env_file:
      - "./local.env"
    volumes:
      - "postgres_data:/var/lib/postgresql/data"
      # - /etc/timezone:/etc/timezone:ro
      # - /etc/localtime:/etc/localtime:ro
    restart: "unless-stopped"
  nginx:
    image: nginx:1.21.1-alpine
    depends_on:
      - "nautobot"
    ports:
      - "443:443"
    volumes:
      - /etc/timezone:/etc/timezone:ro
      - /etc/localtime:/etc/localtime:ro
      - ./nginx-default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./ssl/server-name.cer:/etc/ssl/certs/nginx.crt:ro
      - ./ssl/server-name.nopasswd.key.pem:/etc/ssl/private/nginx.key.pem:ro
volumes:
  postgres_data:
Dockerfile-LDAP
ARG PYTHON_VER=3.10
ARG NAUTOBOT_VERSION=1.5.6
FROM networktocode/nautobot:${NAUTOBOT_VERSION}-py${PYTHON_VER} as base
USER 0
RUN apt-get update -y && apt-get install -y libldap2-dev libsasl2-dev libssl-dev ca-certificates
# ---------------------------------
# Stage: Builder
# ---------------------------------
FROM base as builder
RUN apt-get install -y gcc && \
    apt-get autoremove -y && \
    apt-get clean all && \
    rm -rf /var/lib/apt/lists/*
RUN pip3 install --upgrade pip wheel && pip3 install django-auth-ldap
# ---------------------------------
# Stage: Final
# ---------------------------------
FROM base as final
ARG PYTHON_VER
USER 0
COPY --from=builder /usr/local/lib/python${PYTHON_VER}/site-packages /usr/local/lib/python${PYTHON_VER}/site-packages
COPY --from=builder /usr/local/bin /usr/local/bin
COPY --chown=nautobot:nautobot ./nautobot_config.py /opt/nautobot/nautobot_config.py
COPY ./ssl/ca/root.crt /usr/local/share/ca-certificates/
COPY ./ssl/ca/interm.crt /usr/local/share/ca-certificates/
RUN update-ca-certificates
USER nautobot
WORKDIR /opt/nautobot
COPY --chown=nautobot:nautobot ./local_requirements.txt /opt/nautobot/
RUN pip install --no-warn-script-location -r /opt/nautobot/local_requirements.txt
COPY --chown=nautobot:nautobot ./plugin_requirements.txt /opt/nautobot/
RUN pip install --no-warn-script-location -r /opt/nautobot/plugin_requirements.txt
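With the compose file and the Dockerfile in place, building the custom image and starting the stack is the usual docker-compose flow (a sketch; run it from the project directory shown in the next section):

docker-compose build
docker-compose up -d
docker-compose ps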
Directory Structure
This is what the Directory and Files structure looks like:
.
├── Dockerfile-LDAP
├── docker-compose.yml
├── exportsql.sh
├── importsql.sh
├── jobs
│ └── __init__.py
├── local.env
├── local_requirements.txt
├── nautobot.sql
├── nautobot_config.py
├── nginx-default.conf
├── plugin_requirements.txt
└── ssl
├── ca
│ ├── ca-chained.crt
│ ├── interm.crt
│ └── root.crt
├── server-name.crt
└── server-name.key
I have a lot of questions about these..
I can imagine. I have barely scratched the surface so far. Let me try to guess some of your questions:
- Why are you copying over the nautobot_config.py file in the dockerfile and then mounting it in docker-compose?
- Why do you mount the nautobot_config.py file in three different services?
- Jobs Dir? And it’s almost empty? What’s this?
- Why are you running update-ca-certificates in the dockerfile?
- What is contained in local.env?
- What is contained in the rest of the files (nautobot_config.py and nginx.conf)?
- Why are you changing users in the dockerfile?
- What are exportsql.sh and importsql.sh for and what is contained in them?
- What is the nautobot.sql file for? Does it have anything to do with the DB?
It’s possible you have these questions in mind, and maybe more. I was driven to this exact setup by discoveries, help, and decisions made along the way, while trying to solve every problem I came across. It would take a while to address everything in one blog post, so let’s first summarize how things are set up. Then I will show you exactly how to prepare the system for Docker and how to run Nautobot after that.
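To take the sting out of the last two questions right away: exportsql.sh and importsql.sh simply wrap a Postgres dump/restore of the nautobot DB, producing and consuming the nautobot.sql file. A minimal sketch of the idea, not my exact scripts (PGPASSWORD is already set inside the postgres container via local.env, so these run without a password prompt):

#!/bin/sh
# exportsql.sh (sketch): dump the nautobot DB from the postgres service into nautobot.sql
docker-compose exec -T postgres pg_dump -U nautobot -d nautobot > nautobot.sql

#!/bin/sh
# importsql.sh (sketch): load nautobot.sql back into the postgres service
docker-compose exec -T postgres psql -U nautobot -d nautobot < nautobot.sql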
Logging into Nautobot using LDAP or Active Directory
Provided you have installed the necessary packages for LDAP, adding support for logging into Nautobot with Microsoft Active Directory credentials is just a matter of providing the right settings in the Nautobot configuration. The Nautobot documentation provides some guidance here (it carries over from the Netbox days, as it’s basically Django LDAP support, mirroring MS-AD groups to access rights for Nautobot objects and functions). You will need to explore how your own Active Directory implementation fits into the picture:
- define credentials for LDAP queries,
- define where (which OU) to search for users and groups,
- define groups for accessing Nautobot and for assigning admin and operator roles, etc.
You will catch a glimpse of my own setup in the next section, as the LDAP settings have to be included in the Nautobot configuration file.
There are also multiple ways to test things against your MS-AD/LDAP server and check what is supported or whether your queries are correct. You can use software on a windows machine, like Softerra LDAP Browser, or plain old ldapsearch on your Ubuntu server. Ldapsearch can be installed as an apt package (it’s included in ldap-utils, see here for installation), so if you want to add it to your docker image, you certainly can. I personally see no point, as your basic testing should be done by the time you build your image. But that’s up to you.
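For reference, here is a hedged ldapsearch invocation for testing an AD bind and a user lookup (the DNs and filter mirror the LDAP settings shown later in the config; replace them with your own):

# Bind as the service account and look up one user, prompting for the bind password
ldapsearch -H ldaps://msad-ldap-server:636 \
  -D "CN=ldapuser,OU=usercategory,OU=Users,DC=domainname,DC=gr" -W \
  -b "OU=Personnel,OU=Users,DC=domainname,DC=gr" \
  "(sAMAccountName=jdoe)" cn mail memberOf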
Nautobot configuration and environment variables
nautobot_config.py is the basic configuration file for Nautobot. It’s part of the Django-related structure in Nautobot (pretty much as it was in Netbox, though I think there were some additional files to touch back when I was using Netbox). The syntax in this file is python, so those of you who know a bit of Python will be very comfortable with it. The Nautobot documentation tells you how to go about creating and editing it here. One thing to note is how you can create a default config (this works after completing the basic setup of Nautobot in a vm):
nautobot-server init
If you take a look inside you will see a lot of assignments, like:
ALLOWED_HOSTS = os.getenv("NAUTOBOT_ALLOWED_HOSTS", "").split(" ")
That type of assignment tries to get a value from an environment variable, if such a variable is defined. If it’s not, the second value after the comma is used as a default. Don’t mind the split method: the assignment assumes it will get a string, which is transformed into a list of strings (the allowed hosts list) using the space character as a separator. Like I said, python.
So where are environment variables defined for Docker containers and docker-compose?
You can define environment variables with the -e option when using docker run to launch containers from the CLI. In docker-compose.yml you can define individual variables under the “environment:” key, or point to a file that contains them with the “env_file:” key, for example “env_file: local.env”. Yes, that’s right, that is what the local.env file is for!
By using a file to define the environment variables, you only write them once and make sure they are defined for every service that needs them, just by including the env_file key under each one. That makes them available inside the containers.
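If you want to verify that a variable from local.env actually reached a given service, a quick check against the running stack looks like this (the service name is the one defined in docker-compose.yml):

# Print one variable as seen inside the nautobot container
docker-compose exec nautobot sh -c 'echo "$NAUTOBOT_ALLOWED_HOSTS"'

# Or list everything Nautobot-related in one go
docker-compose exec nautobot env | grep '^NAUTOBOT_'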
However, that doesn’t mean the Nautobot containers will use them, unless you reference them in the nautobot_config.py file. It’s common for tools deployed on Docker to mount their config files as volume entries: the installation already happened when the image was created, so when the container starts, it’s ready to run. Naturally this means the configuration files must already be in their final state when they are mounted at container launch. That can seem a little confusing for people not accustomed to deploying such infrastructure on Docker: you need the config files to deploy the containers, but how can you create the files if the containers are not running?
The way differs from one piece of software to the next:
- Sometimes there’s a default config included in the container image: run the container without mounting anything, copy the config out with ‘docker cp‘, modify it to your liking, and then mount it (see the sketch a little further down). Remember that modifying the config inside the container without an external point of reference is no good: if you destroy the container, your modifications are gone.
- Sometimes there’s a command you can run that will generate a config file you can work on, just like the one I mentioned earlier.
- The rest of the time, you can probably find a way to download a default config file, for example from the github repository where the tool’s code lives.
Generating a default config is useful for more than the initial installation: each time the tool is updated, you will probably need to look for changes in the config. With Nautobot I use the first way, and I always compare the file from the new version with the one I am currently using.
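For Nautobot on Docker, that workflow looks roughly like this (a sketch, assuming the base image ships its default config at /opt/nautobot/nautobot_config.py; the .new filename is just a convention):

# Pull the pristine default config out of the new image (no volume mounts involved)
docker run --rm --entrypoint cat networktocode/nautobot:1.5.6-py3.10 \
  /opt/nautobot/nautobot_config.py > nautobot_config.new.py

# Compare against the config currently in use
diff -u nautobot_config.py nautobot_config.new.py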
What’s in the files?
Here is the local.env file:
# ---------------------
# PYTHON_VER is used for which version of Python to use. Check hub.docker.com for the available versions
# ---------------------
PYTHON_VER=3.10
# This should be limited to the hosts that are going to be the web app.
# https://docs.djangoproject.com/en/3.2/ref/settings/#allowed-hosts
NAUTOBOT_ALLOWED_HOSTS=*
NAUTOBOT_CHANGELOG_RETENTION=0
NAUTOBOT_CONFIG=/opt/nautobot/nautobot_config.py
#
# THIS SHOULD BE CHANGED! These are the settings for the database
#
NAUTOBOT_DB_HOST=postgres
NAUTOBOT_DB_NAME=nautobot
NAUTOBOT_DB_PASSWORD=dbpassword
NAUTOBOT_DB_USER=nautobot
#NAUTOBOT_DB_TIMEOUT=300
#NAUTOBOT_DB_ENGINE=django.db.backends.postgresql
NAUTOBOT_MAX_PAGE_SIZE=0
NAUTOBOT_NAPALM_TIMEOUT=5
# NAUTOBOT REDIS SETTINGS
# When updating NAUTOBOT_REDIS_PASSWORD, make sure to update the password in
# the NAUTOBOT_CACHEOPS_REDIS line as well!
#
NAUTOBOT_REDIS_HOST=redis
NAUTOBOT_REDIS_PASSWORD=cachepassword
NAUTOBOT_CACHEOPS_REDIS=redis://:cachepassword@redis:6379/0
NAUTOBOT_REDIS_PORT=6379
# Uncomment REDIS_SSL if using SSL
# NAUTOBOT_REDIS_SSL=True
# It is required if you export the database to other locations
NAUTOBOT_SECRET_KEY='$f9$h*4ia6wj$*h@sjc5-1n2)5sjwavo+)@e$*^2os9xn17xbn'
# Needed for Postgres should match the values for Nautobot above, this is used in the Postgres container
PGPASSWORD=dbpassword
POSTGRES_DB=nautobot
POSTGRES_PASSWORD=dbpassword
POSTGRES_USER=nautobot
NAUTOBOT_HIDE_RESTRICTED_UI=False
# Set Super User Credentials
NAUTOBOT_CREATE_SUPERUSER=true
NAUTOBOT_SUPERUSER_NAME=admin
NAUTOBOT_SUPERUSER_EMAIL=admin@example.com
NAUTOBOT_SUPERUSER_PASSWORD=adminpassword
NAUTOBOT_SUPERUSER_API_TOKEN=0123456789abcdef
NAUTOBOT_LOG_LEVEL=INFO
#napalm
NAUTOBOT_NAPALM_USERNAME='nwdeviceuserfornapalm'
NAUTOBOT_NAPALM_PASSWORD='nwdevicenapalmpassword'
NAUTOBOT_NAPALM_SECRET='nwdevicenapalmenablepassword'
NAUTOBOT_PREFER_IPV4=True
# Remote authentication support
REMOTE_AUTH_ENABLED=True
REMOTE_AUTH_BACKEND='django_auth_ldap.backend.LDAPBackend'
NAUTOBOT_TIME_ZONE="Europe/Athens"
NAUTOBOT_SHORT_DATE_FORMAT="d-m-Y"
NAUTOBOT_SHORT_DATETIME_FORMAT="d-m-Y H:i"
# Optionally display a persistent banner at the top and/or bottom of every page. HTML is allowed. To display the same
# content in both banners, define BANNER_TOP and set BANNER_BOTTOM = BANNER_TOP.
NAUTOBOT_BANNER_TOP='This is the Bank of Greece Prod Nautobot System'
NAUTOBOT_BANNER_BOTTOM='Databases are synced overnight with DR system'
NAUTOBOT_BANNER_LOGIN='Enter at your own risk, here there be Dragons!'
NAUTOBOT_PRIME_USERNAME='ciscoprimeapiuser'
NAUTOBOT_PRIME_PASSWORD='ciscoprimeapipassword'
# The ip address for Cisco Prime Infrastructure
NAUTOBOT_PRIME_API_ADDRESS='x.y.z.w'
NAUTOBOT_GITLAB_USERNAME='oauth2'
NAUTOBOT_GITLAB_TOKEN='thesecretdefinedforaccesstoGitlab'
NAUTOBOT_DEVICE_SECRET='nwsecretdefinedforenablesecretinnautobot'
#GIT_SSL_NO_VERIFY="1" #used only if there is no trust for the certificate used for gitlab
I bet the contents created even more questions. Let’s add the contents of nautobot_config.py as well, and then see how many questions remain. You will find that a lot of the text is commented out; this happened over time, as the Nautobot developers added default values for the environment variables into the code, which saved a lot of typing. It’s a large piece of text, so be patient.
import os
import sys
from nautobot.core.settings import * # noqa F401,F403
from nautobot.core.settings_funcs import is_truthy, parse_redis_connection
#########################
# #
# Required settings #
# #
#########################
# This is a list of valid fully-qualified domain names (FQDNs) for the Nautobot server. Nautobot will not permit write
# access to the server via any other hostnames. The first FQDN in the list will be treated as the preferred name.
#
# Example: ALLOWED_HOSTS = ['nautobot.example.com', 'nautobot.internal.local']
#
# ALLOWED_HOSTS = os.getenv("NAUTOBOT_ALLOWED_HOSTS", "").split(" ")
# The django-redis cache is used to establish concurrent locks using Redis. The
# django-rq settings will use the same instance/database by default.
#
# CACHES = {
# "default": {
# "BACKEND": "django_redis.cache.RedisCache",
# "LOCATION": parse_redis_connection(redis_database=0),
# "TIMEOUT": 300,
# "OPTIONS": {
# "CLIENT_CLASS": "django_redis.client.DefaultClient",
# "PASSWORD": "",
# },
# }
# }
# Redis connection to use for caching.
#
# CACHEOPS_REDIS = os.getenv("NAUTOBOT_CACHEOPS_REDIS", parse_redis_connection(redis_database=1))
# Celery broker URL used to tell workers where queues are located
#
# CELERY_BROKER_URL = os.getenv("NAUTOBOT_CELERY_BROKER_URL", parse_redis_connection(redis_database=0))
# Celery results backend URL to tell workers where to publish task results
#
# CELERY_RESULT_BACKEND = os.getenv("NAUTOBOT_CELERY_RESULT_BACKEND", parse_redis_connection(redis_database=0))
# Database configuration. See the Django documentation for a complete list of available parameters:
# https://docs.djangoproject.com/en/stable/ref/settings/#databases
#
# DATABASES = {
# "default": {
# "NAME": os.getenv("NAUTOBOT_DB_NAME", "nautobot"), # Database name
# "USER": os.getenv("NAUTOBOT_DB_USER", ""), # Database username
# "PASSWORD": os.getenv("NAUTOBOT_DB_PASSWORD", ""), # Database password
# "HOST": os.getenv("NAUTOBOT_DB_HOST", "localhost"), # Database server
# "PORT": os.getenv("NAUTOBOT_DB_PORT", ""), # Database port (leave blank for default)
# "CONN_MAX_AGE": int(os.getenv("NAUTOBOT_DB_TIMEOUT", "300")), # Database timeout
# "ENGINE": os.getenv(
# "NAUTOBOT_DB_ENGINE", "django.db.backends.postgresql"
# ), # Database driver ("mysql" or "postgresql")
# }
# }
# Ensure proper Unicode handling for MySQL
#
if DATABASES["default"]["ENGINE"] == "django.db.backends.mysql":
DATABASES["default"]["OPTIONS"] = {"charset": "utf8mb4"}
# These defaults utilize the Django caches setting defined for django-redis.
# See: https://github.com/rq/django-rq#support-for-django-redis-and-django-redis-cache
#
# RQ_QUEUES = {
# "default": {
# "USE_REDIS_CACHE": "default",
# },
# "check_releases": {
# "USE_REDIS_CACHE": "default",
# },
# "custom_fields": {
# "USE_REDIS_CACHE": "default",
# },
# "webhooks": {
# "USE_REDIS_CACHE": "default",
# },
# }
# This key is used for secure generation of random numbers and strings. It must never be exposed outside of this file.
# For optimal security, SECRET_KEY should be at least 50 characters in length and contain a mix of letters, numbers, and
# symbols. Nautobot will not run without this defined. For more information, see
# https://docs.djangoproject.com/en/stable/ref/settings/#std:setting-SECRET_KEY
SECRET_KEY = os.getenv("NAUTOBOT_SECRET_KEY", "nautobot_secret_key")
#####################################
# #
# Optional Django core settings #
# #
#####################################
# Specify one or more name and email address tuples representing Nautobot administrators.
# These people will be notified of application errors (assuming correct email settings are provided).
#
# ADMINS = [
# ['John Doe', 'jdoe@example.com'],
# ]
# FQDNs that are considered trusted origins for secure, cross-domain, requests such as HTTPS POST.
# If running Nautobot under a single domain, you may not need to set this variable;
# if running on multiple domains, you *may* need to set this variable to more or less the same as ALLOWED_HOSTS above.
# https://docs.djangoproject.com/en/stable/ref/settings/#csrf-trusted-origins
#
# CSRF_TRUSTED_ORIGINS = []
# Date/time formatting. See the following link for supported formats:
# https://docs.djangoproject.com/en/stable/ref/templates/builtins/#date
#
# DATE_FORMAT = os.getenv("NAUTOBOT_DATE_FORMAT", "N j, Y")
# SHORT_DATE_FORMAT = os.getenv("NAUTOBOT_SHORT_DATE_FORMAT", "Y-m-d")
# TIME_FORMAT = os.getenv("NAUTOBOT_TIME_FORMAT", "g:i a")
# SHORT_TIME_FORMAT = os.getenv("NAUTOBOT_SHORT_TIME_FORMAT", "H:i:s")
# DATETIME_FORMAT = os.getenv("NAUTOBOT_DATETIME_FORMAT", "N j, Y g:i a")
# SHORT_DATETIME_FORMAT = os.getenv("NAUTOBOT_SHORT_DATETIME_FORMAT", "Y-m-d H:i")
# Set to True to enable server debugging. WARNING: Debugging introduces a substantial performance penalty and may reveal
# sensitive information about your installation. Only enable debugging while performing testing. Never enable debugging
# on a production system.
#
# DEBUG = is_truthy(os.getenv("NAUTOBOT_DEBUG", "False"))
# If hosting Nautobot in a subdirectory, you must set this value to match the base URL prefix configured in your
# HTTP server (e.g. `/nautobot/`). When not set, URLs will default to being prefixed by `/`.
#
# FORCE_SCRIPT_NAME = None
# IP addresses recognized as internal to the system.
#
# INTERNAL_IPS = ("127.0.0.1", "::1")
# Enable custom logging. Please see the Django documentation for detailed guidance on configuring custom logs:
# https://docs.djangoproject.com/en/stable/topics/logging/
#
# LOGGING = {
# "version": 1,
# "disable_existing_loggers": False,
# "formatters": {
# "normal": {
# "format": "%(asctime)s.%(msecs)03d %(levelname)-7s %(name)s :\n %(message)s",
# "datefmt": "%H:%M:%S",
# },
# "verbose": {
# "format": "%(asctime)s.%(msecs)03d %(levelname)-7s %(name)-20s %(filename)-15s %(funcName)30s() :\n %(message)s",
# "datefmt": "%H:%M:%S",
# },
# },
# "handlers": {
# "normal_console": {
# "level": "INFO",
# "class": "logging.StreamHandler",
# "formatter": "normal",
# },
# "verbose_console": {
# "level": "DEBUG",
# "class": "logging.StreamHandler",
# "formatter": "verbose",
# },
# },
# "loggers": {
# "django": {"handlers": ["normal_console"], "level": "INFO"},
# "nautobot": {
# "handlers": ["verbose_console" if DEBUG else "normal_console"],
# "level": "DEBUG" if DEBUG else "INFO",
# },
# },
# }
# The file path where uploaded media such as image attachments are stored. A trailing slash is not needed.
#
# MEDIA_ROOT = os.path.join(NAUTOBOT_ROOT, "media").rstrip("/")
# The length of time (in seconds) for which a user will remain logged into the web UI before being prompted to
# re-authenticate. (Default: 1209600 [14 days])
#
# SESSION_COOKIE_AGE = int(os.getenv("NAUTOBOT_SESSION_COOKIE_AGE", "1209600")) # 2 weeks, in seconds
# Where Nautobot stores user session data.
#
# SESSION_ENGINE = "django.contrib.sessions.backends.db"
# By default, Nautobot will store session data in the database. Alternatively, a file path can be specified here to use
# local file storage instead. (This can be useful for enabling authentication on a standby instance with read-only
# database access.) Note that the user as which Nautobot runs must have read and write permissions to this path.
#
# SESSION_FILE_PATH = os.getenv("NAUTOBOT_SESSION_FILE_PATH", None)
# Where static files (CSS, JavaScript, etc.) are stored
#
# STATIC_ROOT = os.path.join(NAUTOBOT_ROOT, "static")
# Time zone (default: UTC)
#
# TIME_ZONE = os.getenv("NAUTOBOT_TIME_ZONE", "UTC")
###################################################################
# #
# Optional settings specific to Nautobot and its related apps #
# #
###################################################################
# URL schemes that are allowed within links in Nautobot
#
# ALLOWED_URL_SCHEMES = (
# "file",
# "ftp",
# "ftps",
# "http",
# "https",
# "irc",
# "mailto",
# "sftp",
# "ssh",
# "tel",
# "telnet",
# "tftp",
# "vnc",
# "xmpp",
# )
# Banners (HTML is permitted) to display at the top and/or bottom of all Nautobot pages, and on the login page itself.
#
# BANNER_BOTTOM = ""
# BANNER_LOGIN = ""
# BANNER_TOP = ""
# Branding logo locations. The logo takes the place of the Nautobot logo in the top right of the nav bar.
# The filepath should be relative to the `MEDIA_ROOT`.
#
# BRANDING_FILEPATHS = {
# "logo": os.getenv("NAUTOBOT_BRANDING_FILEPATHS_LOGO", None), # Navbar logo
# "favicon": os.getenv("NAUTOBOT_BRANDING_FILEPATHS_FAVICON", None), # Browser favicon
# "icon_16": os.getenv("NAUTOBOT_BRANDING_FILEPATHS_ICON_16", None), # 16x16px icon
# "icon_32": os.getenv("NAUTOBOT_BRANDING_FILEPATHS_ICON_32", None), # 32x32px icon
# "icon_180": os.getenv(
# "NAUTOBOT_BRANDING_FILEPATHS_ICON_180", None
# ), # 180x180px icon - used for the apple-touch-icon header
# "icon_192": os.getenv("NAUTOBOT_BRANDING_FILEPATHS_ICON_192", None), # 192x192px icon
# "icon_mask": os.getenv(
# "NAUTOBOT_BRANDING_FILEPATHS_ICON_MASK", None
# ), # mono-chrome icon used for the mask-icon header
# }
# Prepended to CSV, YAML and export template filenames (i.e. `nautobot_device.yml`)
#
# BRANDING_PREPENDED_FILENAME = os.getenv("NAUTOBOT_BRANDING_PREPENDED_FILENAME", "nautobot_")
# Title to use in place of "Nautobot"
#
# BRANDING_TITLE = os.getenv("NAUTOBOT_BRANDING_TITLE", "Nautobot")
# Branding URLs (links in the bottom right of the footer)
#
# BRANDING_URLS = {
# "code": os.getenv("NAUTOBOT_BRANDING_URLS_CODE", "https://github.com/nautobot/nautobot"),
# "docs": os.getenv("NAUTOBOT_BRANDING_URLS_DOCS", None),
# "help": os.getenv("NAUTOBOT_BRANDING_URLS_HELP", "https://github.com/nautobot/nautobot/wiki"),
# }
# Cache timeout in seconds. Cannot be 0. Defaults to 900 (15 minutes). To disable caching, set CACHEOPS_ENABLED to False
#
# CACHEOPS_DEFAULTS = {"timeout": int(os.getenv("NAUTOBOT_CACHEOPS_TIMEOUT", "900"))}
# Set to True to enable caching with cacheops. (Default: False)
#
# CACHEOPS_ENABLED = is_truthy(os.getenv("NAUTOBOT_CACHEOPS_ENABLED", "False"))
# Set to True to enable periodic health checks for the Redis server connection used by cacheops.
#
# CACHEOPS_HEALTH_CHECK_ENABLED = False
# Options to pass to the Celery broker transport, for example when using Celery with Redis Sentinel.
#
# CELERY_BROKER_TRANSPORT_OPTIONS = {}
# Options to pass to the Celery result backend transport, for example when using Celery with Redis Sentinel.
#
# CELERY_RESULT_BACKEND_TRANSPORT_OPTIONS = {}
# Default celery queue name that will be used by workers and tasks if no queue is specified
# CELERY_TASK_DEFAULT_QUEUE = os.getenv("NAUTOBOT_CELERY_TASK_DEFAULT_QUEUE", "default")
# Global task time limits (seconds)
# Exceeding the soft limit will result in a SoftTimeLimitExceeded exception,
# while exceeding the hard limit will result in a SIGKILL.
#
# CELERY_TASK_SOFT_TIME_LIMIT = int(os.getenv("NAUTOBOT_CELERY_TASK_SOFT_TIME_LIMIT", str(5 * 60)))
# CELERY_TASK_TIME_LIMIT = int(os.getenv("NAUTOBOT_CELERY_TASK_TIME_LIMIT", str(10 * 60)))
# Number of days to retain changelog entries. Set to 0 to retain changes indefinitely.
#
# CHANGELOG_RETENTION = 90
# If True, all origins will be allowed. Other settings restricting allowed origins will be ignored.
# Defaults to False. Setting this to True can be dangerous, as it allows any website to make
# cross-origin requests to yours. Generally you'll want to restrict the list of allowed origins with
# CORS_ALLOWED_ORIGINS or CORS_ALLOWED_ORIGIN_REGEXES.
#
# CORS_ALLOW_ALL_ORIGINS = is_truthy(os.getenv("NAUTOBOT_CORS_ALLOW_ALL_ORIGINS", "False"))
# A list of origins that are authorized to make cross-site HTTP requests. Defaults to [].
#
# CORS_ALLOWED_ORIGINS = [
# 'https://hostname.example.com',
# ]
# A list of strings representing regexes that match Origins that are authorized to make cross-site
# HTTP requests. Defaults to [].
#
# CORS_ALLOWED_ORIGIN_REGEXES = [
# r'^(https?://)?(\w+\.)?example\.com$',
# ]
# Set to True to disable rendering of the IP prefix hierarchy in the IPAM prefix list view.
# Useful in case of poor performance when rendering this page.
#
# DISABLE_PREFIX_LIST_HIERARCHY = False
# Enforcement of unique IP space can be toggled on a per-VRF basis. To enforce unique IP space
# within the global table (all prefixes and IP addresses not assigned to a VRF), set ENFORCE_GLOBAL_UNIQUE to True.
#
# ENFORCE_GLOBAL_UNIQUE = is_truthy(os.getenv("NAUTOBOT_ENFORCE_GLOBAL_UNIQUE", "False"))
# Exempt certain models from the enforcement of view permissions. Models listed here will be viewable by all users and
# by anonymous users. List models in the form `<app>.<model>`. Add '*' to this list to exempt all models.
# Defaults to [].
#
# EXEMPT_VIEW_PERMISSIONS = [
# 'dcim.site',
# 'dcim.region',
# 'ipam.prefix',
# ]
# Global 3rd-party authentication settings
#
# EXTERNAL_AUTH_DEFAULT_GROUPS = []
# EXTERNAL_AUTH_DEFAULT_PERMISSIONS = {}
# Directory where cloned Git repositories will be stored.
#
# GIT_ROOT = os.getenv("NAUTOBOT_GIT_ROOT", os.path.join(NAUTOBOT_ROOT, "git").rstrip("/"))
# Prefixes to use for custom fields, relationships, and computed fields in GraphQL representation of data.
#
# GRAPHQL_COMPUTED_FIELD_PREFIX = "cpf"
# GRAPHQL_CUSTOM_FIELD_PREFIX = "cf"
# GRAPHQL_RELATIONSHIP_PREFIX = "rel"
# Set to True to hide rather than disabling UI elements that a user doesn't have permission to access.
#
# HIDE_RESTRICTED_UI = False
# HTTP proxies Nautobot should use when sending outbound HTTP requests (e.g. for webhooks).
#
# HTTP_PROXIES = {
# 'http': 'http://10.10.1.10:3128',
# 'https': 'http://10.10.1.10:1080',
# }
# Directory where Jobs can be discovered.
#
# JOBS_ROOT = os.getenv("NAUTOBOT_JOBS_ROOT", os.path.join(NAUTOBOT_ROOT, "jobs").rstrip("/"))
# Log Nautobot deprecation warnings. Note that this setting is ignored (deprecation logs always enabled) if DEBUG = True
#
# LOG_DEPRECATION_WARNINGS = is_truthy(os.getenv("NAUTOBOT_LOG_DEPRECATION_WARNINGS", "False"))
# Setting this to True will display a "maintenance mode" banner at the top of every page.
#
# MAINTENANCE_MODE = is_truthy(os.getenv("NAUTOBOT_MAINTENANCE_MODE", "False"))
# Maximum number of objects that the UI and API will retrieve in a single request.
#
# MAX_PAGE_SIZE = 1000
# Expose Prometheus monitoring metrics at the HTTP endpoint '/metrics'
#
# METRICS_ENABLED = is_truthy(os.getenv("NAUTOBOT_METRICS_ENABLED", "False"))
# Credentials that Nautobot will use to authenticate to devices when connecting via NAPALM.
#
# NAPALM_USERNAME = os.getenv("NAUTOBOT_NAPALM_USERNAME", "")
# NAPALM_PASSWORD = os.getenv("NAUTOBOT_NAPALM_PASSWORD", "")
NAPALM_SECRET = os.getenv("NAUTOBOT_NAPALM_SECRET", "")
DEVICE_SECRET = os.getenv("NAUTOBOT_DEVICE_SECRET", "")
# NAPALM timeout (in seconds). (Default: 30)
#
# NAPALM_TIMEOUT = int(os.getenv("NAUTOBOT_NAPALM_TIMEOUT", "30"))
# NAPALM optional arguments (see https://napalm.readthedocs.io/en/latest/support/#optional-arguments). Arguments must
# be provided as a dictionary.
NAPALM_ARGS = {
    "secret": DEVICE_SECRET,
}
# Default number of objects to display per page of the UI and REST API.
#
# PAGINATE_COUNT = 50
# Options given in the web UI for the number of objects to display per page.
#
# PER_PAGE_DEFAULTS = [25, 50, 100, 250, 500, 1000]
# Enable installed plugins. Add the name of each plugin to the list.
#
PLUGINS = [
    "nautobot_device_onboarding",
    "nautobot_device_lifecycle_mgmt",
    "nautobot_bgp_models",
    "nautobot_plugin_nornir",
    "nautobot_golden_config",
]
# Plugins configuration settings. These settings are used by various plugins that the user may have installed.
# Each key in the dictionary is the name of an installed plugin and its value is a dictionary of settings.
#
# PLUGINS_CONFIG = {
# 'my_plugin': {
# 'foo': 'bar',
# 'buzz': 'bazz'
# }
# }
PLUGINS_CONFIG = {
    "nautobot_device_lifecycle_mgmt": {
        "barchart_bar_width": float(os.environ.get("BARCHART_BAR_WIDTH", 0.1)),
        "barchart_width": int(os.environ.get("BARCHART_WIDTH", 12)),
        "barchart_height": int(os.environ.get("BARCHART_HEIGHT", 5)),
    },
    "nautobot_bgp_models": {
        "default_statuses": {
            "AutonomousSystem": ["active", "available", "planned"],
            "Peering": ["active", "decommissioned", "deprovisioning", "offline", "planned", "provisioning"],
        }
    },
    "nautobot_plugin_nornir": {
        "use_config_context": {"secrets": False, "connection_options": True},
        # Optionally set global connection options.
        "connection_options": {
            "napalm": {
                "extras": {
                    "optional_args": {"global_delay_factor": 1},
                },
            },
            "netmiko": {
                "extras": {
                    "global_delay_factor": 1,
                },
            },
        },
        "nornir_settings": {
            "credentials": "nautobot_plugin_nornir.plugins.credentials.env_vars.CredentialsEnvVars",
            "runner": {
                "plugin": "threaded",
                "options": {
                    "num_workers": 20,
                },
            },
        },
    },
    "nautobot_golden_config": {
        "per_feature_bar_width": 0.15,
        "per_feature_width": 13,
        "per_feature_height": 4,
        "enable_backup": True,
        "enable_compliance": True,
        "enable_intended": True,
        "enable_sotagg": True,
        "sot_agg_transposer": None,
        "platform_slug_map": None,
        # "get_custom_compliance": "my.custom_compliance.func"
    },
}
# Prefer IPv6 addresses or IPv4 addresses in selecting a device's primary IP address?
#
# PREFER_IPV4 = False
# Default height and width in pixels of a single rack unit in rendered rack elevations.
#
# RACK_ELEVATION_DEFAULT_UNIT_HEIGHT = 22
# RACK_ELEVATION_DEFAULT_UNIT_WIDTH = 220
# Sets an age out timer of redis lock. This is NOT implicitly applied to locks, must be added
# to a lock creation as `timeout=settings.REDIS_LOCK_TIMEOUT`
#
# REDIS_LOCK_TIMEOUT = int(os.getenv("NAUTOBOT_REDIS_LOCK_TIMEOUT", "600"))
# How frequently to check for a new Nautobot release on GitHub, and the URL to check for this information.
#
# RELEASE_CHECK_TIMEOUT = 24 * 3600
# RELEASE_CHECK_URL = None
# Remote auth backend settings
#
# REMOTE_AUTH_AUTO_CREATE_USER = False
# REMOTE_AUTH_HEADER = "HTTP_REMOTE_USER"
# Job log entry sanitization and similar
#
# SANITIZER_PATTERNS = [
# # General removal of username-like and password-like tokens
# (re.compile(r"(https?://)?\S+\s*@", re.IGNORECASE), r"\1{replacement}@"),
# (re.compile(r"(username|password|passwd|pwd)(\s*i?s?\s*:?\s*)?\S+", re.IGNORECASE), r"\1\2{replacement}"),
# ]
# Configure SSO, for more information see docs/configuration/authentication/sso.md
#
# SOCIAL_AUTH_POSTGRES_JSONFIELD = False
# By default uploaded media is stored on the local filesystem. Using Django-storages is also supported. Provide the
# class path of the storage driver in STORAGE_BACKEND and any configuration options in STORAGE_CONFIG.
# These default to None and {} respectively.
#
# STORAGE_BACKEND = 'storages.backends.s3boto3.S3Boto3Storage'
# STORAGE_CONFIG = {
# 'AWS_ACCESS_KEY_ID': 'Key ID',
# 'AWS_SECRET_ACCESS_KEY': 'Secret',
# 'AWS_STORAGE_BUCKET_NAME': 'nautobot',
# 'AWS_S3_REGION_NAME': 'eu-west-1',
# }
# Reject invalid UI/API filter parameters, or discard them while logging a warning?
#
# STRICT_FILTERING = is_truthy(os.getenv("NAUTOBOT_STRICT_FILTERING", "True"))
# UI_RACK_VIEW_TRUNCATE_FUNCTION
#
# def UI_RACK_VIEW_TRUNCATE_FUNCTION(device_display_name):
# """Given device display name, truncate to fit the rack elevation view.
#
# :param device_display_name: Full display name of the device attempting to be rendered in the rack elevation.
# :type device_display_name: str
#
# :return: Truncated device name
# :type: str
# """
# return str(device_display_name).split(".")[0]
# Time zone (default: UTC)
TIME_ZONE = os.getenv("NAUTOBOT_TIME_ZONE", "Europe/Athens")
# Date/time formatting. See the following link for supported formats:
# https://docs.djangoproject.com/en/stable/ref/templates/builtins/#date
DATE_FORMAT = os.getenv("NAUTOBOT_DATE_FORMAT", "N j, Y")
SHORT_DATE_FORMAT = os.getenv("NAUTOBOT_SHORT_DATE_FORMAT", "Y-m-d")
TIME_FORMAT = os.getenv("NAUTOBOT_TIME_FORMAT", "g:i a")
SHORT_TIME_FORMAT = os.getenv("NAUTOBOT_SHORT_TIME_FORMAT", "H:i:s")
DATETIME_FORMAT = os.getenv("NAUTOBOT_DATETIME_FORMAT", "N j, Y g:i a")
SHORT_DATETIME_FORMAT = os.getenv("NAUTOBOT_SHORT_DATETIME_FORMAT", "Y-m-d H:i")
# A list of strings designating all applications that are enabled in this Django installation.
# Each string should be a dotted Python path to an application configuration class (preferred),
# or a package containing an application.
# https://docs.nautobot.com/projects/core/en/latest/configuration/optional-settings/#extra-applications
# EXTRA_INSTALLED_APPS = []
LOG_LEVEL = "DEBUG" if DEBUG else "INFO"
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "normal": {
            "format": "%(asctime)s.%(msecs)03d %(levelname)-7s %(name)s :\n %(message)s",
            "datefmt": "%H:%M:%S",
        },
        "verbose": {
            "format": "%(asctime)s.%(msecs)03d %(levelname)-7s %(name)-20s %(filename)-15s %(funcName)30s() :\n %(message)s",
            "datefmt": "%H:%M:%S",
        },
    },
    "handlers": {
        "normal_console": {
            "level": "DEBUG",
            "class": "logging.StreamHandler",
            "formatter": "normal",
        },
    },
    "loggers": {
        "django": {"handlers": ["normal_console"], "level": LOG_LEVEL},
        "nautobot": {
            "handlers": ["normal_console"],
            "level": LOG_LEVEL,
        },
    },
}
AUTHENTICATION_BACKENDS = [
    'django_auth_ldap.backend.LDAPBackend',
    'nautobot.core.authentication.ObjectPermissionBackend',
]
import ldap
# Server URI
AUTH_LDAP_SERVER_URI = "ldaps://msad-ldap-server:636"
# The following may be needed if you are binding to Active Directory.
AUTH_LDAP_CONNECTION_OPTIONS = {
    ldap.OPT_REFERRALS: 0
}
# Set the DN and password for the Nautobot service account.
AUTH_LDAP_BIND_DN = "CN=ldapuser,OU=usercategory,OU=Users,DC=domainname,DC=gr"
AUTH_LDAP_BIND_PASSWORD = "thepassword"
# Include this setting if you want to ignore certificate errors. This might be needed to accept a self-signed cert.
# Note that this is a Nautobot-specific setting which sets:
# ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
#LDAP_IGNORE_CERT_ERRORS = True
ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_NEVER)
#AUTH_LDAP_START_TLS = False
from django_auth_ldap.config import LDAPSearch
# This search matches users with the sAMAccountName equal to the provided username. This is required if the user's
# username is not in their DN (Active Directory).
AUTH_LDAP_USER_SEARCH = LDAPSearch(
    "OU=Personnel,OU=Users,DC=domainname,DC=gr",
    ldap.SCOPE_SUBTREE,
    "(sAMAccountName=%(user)s)",
)
# If a user's DN is producible from their username, we don't need to search.
AUTH_LDAP_USER_DN_TEMPLATE = None
# You can map user attributes to Django attributes as so.
AUTH_LDAP_USER_ATTR_MAP = {
    "first_name": "givenName",
    "last_name": "sn",
    "email": "mail",
}
from django_auth_ldap.config import LDAPSearch, NestedGroupOfNamesType
# This search ought to return all groups to which the user belongs. django_auth_ldap uses this to determine group
# hierarchy.
AUTH_LDAP_GROUP_SEARCH = LDAPSearch(
    "DC=domainname,DC=gr",
    ldap.SCOPE_SUBTREE,
    "(objectClass=group)",
)
AUTH_LDAP_GROUP_TYPE = NestedGroupOfNamesType()
# Define a group required to login.
AUTH_LDAP_REQUIRE_GROUP = "CN=SimpleNautobotusers,OU=Groups,OU=Users,DC=domainname,DC=gr"
#AUTH_LDAP_MIRROR_GROUPS = True
AUTH_LDAP_MIRROR_GROUPS = ["groups","you","allow","to","be","mirrored","0047"]
# Define special user types using groups. Exercise great caution when assigning superuser status.
AUTH_LDAP_USER_FLAGS_BY_GROUP = {
    "is_active": "CN=SimpleNautobotusers,OU=Groups,OU=Users,DC=domainname,DC=gr",
    "is_staff": "CN=NautobotOperators,OU=Groups,OU=Users,DC=domainname,DC=gr",
    "is_superuser": "CN=NautobotAdmins,OU=Groups,OU=Users,DC=domainname,DC=gr",
}
# For more granular permissions, we can map LDAP groups to Django groups.
AUTH_LDAP_FIND_GROUP_PERMS = True
# Cache groups for one hour to reduce LDAP traffic
AUTH_LDAP_CACHE_GROUPS = True
AUTH_LDAP_CACHE_TIMEOUT = 3600
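One habit worth mentioning before we dissect the files: since nautobot-server wraps Django’s management commands, you can ask the container itself to sanity-check this file after edits (a sketch; I am assuming the standard Django check command here):

# Validate the mounted nautobot_config.py without restarting the stack
docker-compose exec nautobot nautobot-server check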
Ok, let’s address some of the sections of the local.env file.
The first few sections set values for variables referenced in the nautobot_config.py file that control the basic operation of Nautobot: usernames and passwords for the core services and the nautobot api, timezone, time and date formats, nautobot admin user credentials, and top and bottom banners if you want them.
Anything added on top also has to be defined: the LDAP auth backend, napalm secrets for accessing device configurations, and more secrets so that you can store jobs or configs on a gitlab server.
If you want Nautobot to be able to interact with other tools or sources of information, you will need to provide Secrets inside Nautobot. Defining environment variables is the way to make those secrets available in the containers, but to use them in Nautobot you need to define them in the DB, using the GUI. You can find more info here. Keep in mind you can also use external sources for secrets, such as Hashicorp Vault, via the Nautobot Secrets Providers plugin. Secrets are organized in secret groups. In this case we are using three different groups: one for Napalm, one for Gitlab and one for Cisco Prime Infrastructure (we check for documentation drift between Nautobot and Cisco Prime Infrastructure using jobs that query the Cisco Prime Infrastructure REST API – more on jobs later). If you don’t have Cisco Prime Infrastructure in your environment, you can remove those settings, or adapt them for another piece of software you may be using.
In the nautobot_config.py file, most of the options are now commented out because their default values live in the code. There are exceptions, such as the check for whether a mysql DB has been declared instead of postgres, additional settings you may want to expose, and of course the ldap section, which is not included in the default config at all.
As for the additional settings you may find in this case, some of them provide more options for napalm or make secrets available in case you want to run code from within the containers against the network devices (e.g. I use a job to check some documentation data against the configuration on the devices themselves). Of course, all those values are defined in local.env, not in the config file.
The LDAP section is pretty much as it was in the netbox days, because it’s essentially ldap config for django-auth-ldap. So for now all the values are in there; they could be replaced with environment variables too, with the values provided in local.env. I may get to that eventually.
I left the plugins section for last; I will come back to it a couple of paragraphs down, where I talk about plugins.
Serving Nautobot content over HTTPs with Nginx
Using nginx in this context is no different from using it in any other context as a webserver or reverse proxy. Essentially you have a web server (uWSGI) that you don’t want Nautobot users to access directly, so you front it with NGINX. You can do that with the VM-based install too, and there’s a very nice part of the guide covering it, here, including how to create a self-signed certificate. You can also create a CSR to get a certificate signed by a CA. Whether you have a private CA in your organization (as I do) or use public certificates for your server, the process is similar: the CSR is created with the parameters the CA demands, and when the certificate is issued you get back a couple of files, one with the public part of the certificate and one with the private key. Both files need to be included in your nginx docker container and referenced in the nginx configuration file; in our case they are mounted on the nginx container using generic names (nginx.crt, nginx.key.pem). Here is the nginx config mounted on the nginx container:
upstream uwsgi-backend {
    server nautobot:8080;
}

server {
    listen 443 ssl;
    server_name servername.domain.gr;

    ssl_certificate /etc/ssl/certs/nginx.crt;
    ssl_certificate_key /etc/ssl/private/nginx.key.pem;

    #access_log /var/log/nginx/host.access.log main;

    location / {
        proxy_pass http://uwsgi-backend;
        proxy_set_header X-Forwarded-Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
    }

    #access_log off;
    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}

server {
    # Redirect HTTP traffic to HTTPS
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}
Two things to note here:
- The server name needs to correspond to what is configured in the certificate and must be resolvable in your network, so that users can access the server.
- There is an upstream uwsgi-backend reference to the main nautobot container. That’s why you see ‘nautobot‘ there instead of a real hostname, ‘localhost‘ or a real IP address. What is ‘nautobot‘ in this context? It’s the name of the service defined in docker-compose.yml. That’s how the container based services know each other in the setup we have defined. It’s like DNS for them. It’s also why we have referenced the db host as ‘postgres‘ in the nautobot config file.
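To illustrate the naming point, here is an abbreviated sketch of the docker-compose.yml services, not the full file, just enough to show where those names come from (the image tags match the docker ps output further down):
services:
  nautobot:
    image: noc/nautobot-ldap:latest   # the custom image built from Dockerfile-LDAP
  nginx:
    image: nginx:1.21.1-alpine
    ports:
      - "443:443"
  postgres:
    image: postgres:14
  redis:
    image: redis:alpine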
Using additional python packages with Nautobot
It’s quite possible you may want to develop a great deal of your own code to use within the Nautobot containers. You will probably need to install additional Python packages in the containers. This needs to be addressed during the creation of the custom Nautobot image. You may have noticed, either from the documentation for the VM based install or from the dockerfile itself (here or in the docker-compose Github repository), that the installation of nautobot happens with the nautobot user in its own virtual environment. To install those packages you need to use the nautobot user. For system packages it’s the opposite: those need to be installed by root (USER 0). That’s why in our case, where we need the django-ldap support plus additional python packages, we switch to USER 0 to install the apt packages as root in the builder stage and later switch to the nautobot user for the python packages in the final stage (a condensed sketch of this pattern follows after the package list). Copying over files also happens as root, changing ownership to the nautobot user where necessary. The file with the list of those packages contains the following in my case (local_requirements.txt), but your needs may vary:
netmiko
nornir
nornir-utils
nornir-netmiko
nornir-nautobot
nornir_pyxl
nornir_netconf
xmltodict
ipdb
requests
nornir-inspect
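Here is the condensed sketch of the root/nautobot switching pattern I just described. It is illustrative, not my actual Dockerfile-LDAP: the base image tag and package names are examples, and the real file splits this across a builder and a final stage, collapsed here into one for brevity:
FROM networktocode/nautobot:1.5.6-py3.10

USER 0
# system build dependencies for the LDAP support have to be installed as root
RUN apt-get update && \
    apt-get install -y --no-install-recommends libldap2-dev libsasl2-dev libssl-dev && \
    rm -rf /var/lib/apt/lists/*

# copy as root but hand ownership to the nautobot user
COPY --chown=nautobot:nautobot local_requirements.txt plugin_requirements.txt /opt/nautobot/

USER nautobot
# python packages go in as the nautobot user, inside its own virtualenv
RUN pip install --no-warn-script-location django-auth-ldap \
    -r /opt/nautobot/local_requirements.txt \
    -r /opt/nautobot/plugin_requirements.txt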
You need to be aware that there may be conflicts between package versions. You will see those if you check the logs produced during the building of the image.
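If you want to double-check after the build, pip itself can report unresolved dependency conflicts from inside the running container:
docker exec -it nautobot-docker-nautobot-1 pip check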
Nautobot plugins
Nautobot plugins enhance the capabilities of Nautobot in an organized and concise manner. The list and use of Nautobot plugins is referenced in the Nautobot documentation and website. Some are very popular and provide very useful capabilities. In my case, you already saw the list in the config:
PLUGINS = [
"nautobot_device_onboarding",
"nautobot_device_lifecycle_mgmt",
"nautobot_bgp_models",
"nautobot_plugin_nornir",
"nautobot_golden_config"
]
To install plugins you need to install them first in your custom docker image, just as you would install python packages, and then declare them in the nautobot config as above. You should not forget to fill in the plugin configuration section appropriately, depending on what you want to do with each plugin (look further up in the nautobot_config.py file contents; a sketch also follows below). In our case I have put the plugin package list in a separate file, called plugin_requirements.txt. These are the contents:
nautobot-device-onboarding
nautobot-device-lifecycle-mgmt
nautobot-bgp-models
nautobot-plugin-nornir
nautobot-golden-config
Pretty simple, isn’t it?
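About that plugin configuration section: the shape is a dictionary keyed by plugin name. Here is a hedged sketch; the options shown are examples based on the plugins’ own docs, so verify them against the versions you actually install:
PLUGINS_CONFIG = {
    "nautobot_device_onboarding": {
        # example option, see the onboarding plugin docs for the full list
        "default_device_role": "network",
    },
    "nautobot_plugin_nornir": {
        "nornir_settings": {
            "credentials": "nautobot_plugin_nornir.plugins.credentials.env_vars.CredentialsEnvVars",
            "runner": {"plugin": "threaded", "options": {"num_workers": 20}},
        },
    },
}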
Nautobot Jobs
That’s one of the best features of Nautobot: the ability to deploy your own code and turn your favorite Network SSoT into an automation platform, enriched with all the information you need to make it successful. Jobs are essentially python code, whereas plugins are Django apps. Jobs support a great many ways to customize data entry, as well as to export results. It’s all in the documentation. Make sure you understand how each part needs to be written for jobs to work.
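To make this concrete, here is a minimal sketch of what a Nautobot 1.x job can look like. The class, the file it lives in and the naming convention it checks are all hypothetical:
from nautobot.dcim.models import Device
from nautobot.extras.jobs import Job, StringVar


class DeviceNameAudit(Job):
    """Report devices whose name does not contain a given substring."""

    pattern = StringVar(description="Substring every device name should contain")

    class Meta:
        name = "Device Name Audit"
        description = "Flag devices that do not follow the naming convention"

    def run(self, data, commit):
        # data holds the values entered in the GUI form generated from the vars
        offenders = Device.objects.exclude(name__icontains=data["pattern"])
        for device in offenders:
            self.log_warning(obj=device, message="Name does not match the convention")
        self.log_success(message=f"Checked all devices, found {offenders.count()} offenders")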
There are three ways to deploy jobs in your Nautobot instance:
- As files, locally. This can be in a jobs directory under the Nautobot dir (so you need to define a respective volume mount entry) where you must at least have a file called __init__.py (it can be empty). The rest can be organized in separate files and there are specifications to consider, but there is one catch: you can’t import code from other files to be used in your jobs code, only from python packages (installed from a python package repo and imported in your file). If you want the ability to import code or values from a different project you have two options: you can publish that code as a python package and then use it (I know, sounds crazy, but it’s definitely doable), or you can write your code in the form of a plugin, since a plugin can contain and declare jobs (much more difficult, as it’s about learning to develop in Django, but again doable after investing some time and effort).
- As files, imported from a git repo. You can define git repos in Nautobot to be used on several occasions, and importing jobs is one of them. Almost the same as the previous option, but you need to set up a few more things to make it work. It’s certainly more modular and allows for jobs to be deployed in one place and synced from there.
- As part of a plugin. I already mentioned a case where packaging your code in the form of a plugin can solve some issues, but there’s a lot of depth there and I am not the person to explain it to you; I need to learn about Django and plugin development as well. However there are excellent blog posts (here and here) and Youtube videos by NTC (part1, part2, part3, part4) to get you started, but you do need to put in the work and study.
Using git repos for Jobs
I use Gitlab for it (community edition, in house). To store your jobs on Gitlab or Github you do what you would do for any code repository and define your access. The default user for Gitlab for this is ‘oauth2‘, and you can use the password you should have created for https access to define the secrets, as we mentioned earlier. After that you need to declare the git repo in the GUI under the ‘Extensibility’ menu. This is also where you define that the repo is meant for providing jobs.
You can develop jobs using the first option while using the second for storing the ones you have deployed already. In that regard, there is a way to test jobs in the cli, using the nautobot-server command run from inside your container (you can’t define arguments this way though):
nautobot-server runjob [--username <username>] [--commit] [--local] [--data <data>] <class_path>
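For example, assuming the job sketch from earlier lives in a my_jobs.py file under the local jobs directory, the class path takes the form <source>/<module>/<ClassName> and the invocation would look something like this:
nautobot-server runjob --username admin --commit --local "local/my_jobs/DeviceNameAudit"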
In your search for knowledge on jobs, don’t forget to read the respective section in the documentation thoroughly before running off to discover blog entries (#1, #2) and youtube videos. The Nautobot documentation has lately gone through a major update; it’s beautiful, practical and packed with details you can use.
Back to the questions, please, I have unresolved issues
Ok don’t get worked up, let’s see how many of the questions originally defined are really left unanswered.
How do I include the config?
The first question was not answered yet. I could leave the nautobot_config volume entry in the docker-compose.yml commented out and get the config only from the custom image. That means however that if I just want to test something really quick, I have to rebuild the image. Keeping the volume entry in the docker-compose.yml allows me to just restart Nautobot using docker-compose restart and be done with it. You may ask ‘why then not just use it in docker-compose and remove it from the dockerfile?’ I am solving the file ownership issue in the dockerfile and I prefer to keep my options open. But you don’t have to do what I did.
Three times?
The 2nd question was also left hanging. I ran into a problem at some point, as I had declared the volume entry only in the base Nautobot service/container and had not yet included it in the dockerfile. I noticed that job scheduling didn’t work, along with a lot of other things. Once I figured it out, the reasoning was simple: both the worker and the scheduler are also based on the nautobot image and need to share a lot of configuration settings to make things work. Including the config only in the base nautobot service left the other services in the dark. Once it was included in all three, all problems were solved.
Update CA certs?
The 3rd question was answered, but not the 4th. You may have noticed I copy CA certificate files into the base nautobot image before I run the CA update command. The reason is that the connection with gitlab requires a valid certificate for the gitlab server, and in a private environment that is only possible if nautobot knows about the private CA, so it can verify the gitlab server cert against it. That’s what this is for. It’s an ubuntu command for updating CA certificates, not really related to Nautobot specifically.
What about the rest?
Questions 5, 6 and 7 are answered. The last two questions will be answered in the next paragraphs.
Let’s recap and get to the main point -> How to install Nautobot on Docker with docker-compose
I already mentioned a couple of links about docker and docker-compose, but basically these are the first steps:
- Create an Ubuntu Virtual Machine, preferably the latest LTS (22.04.1).
- Take care of the basic stuff (define what you need for DHCP in netplan, set your timezone, etc).
- Install Docker and docker-compose.
- Adjust your users so that they can run docker commands.
- Use your favorite way of managing your server and editing files remotely. I use cygwin for connecting with ssh, and visual studio code to edit and develop things remotely on the ubuntu server. You can even use it to attach to the docker containers you will create (you need the docker extensions), but describing how to do that is out of scope for this post.
After that, follow what is described in the next sections.
Prepare the files
Get your files prepared according to what I showed you already for the directory and file structure. I usually keep things in a directory under /opt, so /opt/nautobot-docker. About the nautobot config, I suggest ignoring it at first, so that you get a working instance to copy the config from. After that you can run something like this:
docker cp nautobot-docker-nautobot-1:/opt/nautobot/nautobot_config.py ./nautobot_config.py
Edit the file according to the documentation and the examples I gave you and adapt accordingly.
Same for the local.env, Dockerfile.LDAP, local_requirements.txt, plugin_requirements.txt, nginx-default.conf, etc.
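As a taste of what goes into local.env, here are a few representative entries. The NAUTOBOT_* names follow the documented convention; the values are placeholders you must replace:
NAUTOBOT_DB_HOST=postgres
NAUTOBOT_DB_NAME=nautobot
NAUTOBOT_DB_USER=nautobot
NAUTOBOT_DB_PASSWORD=changeme
NAUTOBOT_REDIS_HOST=redis
NAUTOBOT_SECRET_KEY=changeme
NAUTOBOT_ALLOWED_HOSTS=servername.domain.gr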
What happens if you don’t need LDAP support? What happens if you don’t need HTTPS?
For the first, I would say either install the package anyway and leave the LDAP section out of the config, or omit the packages and the config as well (remember to adjust the auth backend). About the second, I will pretend I didn’t read that: you need HTTPS. And let me say that while storing secrets as environment variables may seem acceptable in certain circumstances, it’s not enough. You should strive to get to a point where you use secret providers, such as Hashicorp Vault. It may seem complicated and does bring administrative overhead and technical debt, but I think it’s worth the effort and can be used everywhere you need to secure access to secrets (usernames, passwords, URLs).
Build the image
Once the files are prepared, build the image with docker-compose build --no-cache, run from the directory where docker-compose.yml resides. You will get a lot of output on standard out, so look for any errors. Hopefully you will get a successful build, with a suggestion to use snyk to scan your image for vulnerabilities. Read the output carefully: if something fails without stopping the build, you may get a “successful” image with something omitted, and you will run into problems later on. You may as well spend a little more time and effort upfront.
A few useful commands to check and manage your images:
- docker image list – list all the images contained on the docker host.
- docker image rm <image name> – remove the image named <image name>. This will only work if there is no container using that image. If you are removing the containers to upgrade, you should have run docker-compose down before this.
- docker image prune – delete all dangling image layers not tied to a specific container.
Start and Stop Nautobot
If the image is built successfully you can start your Nautobot instance with
docker-compose up -d
If you are running this for the first time, it’s a good idea to check the logs to see if everything is working as it should, by running
docker-compose logs --follow --tail 500
You can limit the logs to a specific service by including the service name after the tail count. If you followed my advice and need to copy the config file so that you can adjust it, run the command I mentioned earlier:
docker cp nautobot-docker-nautobot-1:/opt/nautobot/nautobot_config.py ./nautobot_config.py
So if that is the case, you can then delete your instance (erase containers and network bridge but not volumes) and remove the images with the following:
docker-compose down
docker image rm <your custom image name>
docker image rm networktocode/nautobot:1.5.6-py3.10
docker image prune
The above does not delete the volumes created. If you need to do that, run docker-compose down --volumes.
Adjust the config, include it in the dockerfile and the docker-compose.yml, and go again. You can check if your containers are healthy and listening with docker ps. This will also show the container names besides all other info.
If you want to just stop your instance, run docker-compose stop. To start your instance again, run docker-compose start.
To attach to the nautobot container, run docker exec -it <nautobot container name> /bin/bash. Usually the nautobot container name is nautobot-docker-nautobot-1; if you don’t set it explicitly it is derived from the project directory and service names, but it can also be customized. It’s important to know the name, as it’s needed for the export/import process. Here is what docker ps shows on my host:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
025581770f8b nginx:1.21.1-alpine "/docker-entrypoint.…" 3 days ago Up 3 days 80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp nautobot-docker-nginx-1
cff2bc1bfc67 noc/nautobot-ldap:latest "sh -c 'nautobot-ser…" 3 days ago Up 3 days (healthy) 8080/tcp, 8443/tcp nautobot-docker-celery_worker-1
8d25bd182b04 noc/nautobot-ldap:latest "sh -c 'nautobot-ser…" 3 days ago Up 3 days (unhealthy) 8080/tcp, 8443/tcp nautobot-docker-celery_beat-1
278165df0ba7 noc/nautobot-ldap:latest "/docker-entrypoint.…" 3 days ago Up 3 days (healthy) 8080/tcp, 8443/tcp nautobot-docker-nautobot-1
8dc863039ebf postgres:14 "docker-entrypoint.s…" 3 days ago Up 3 days 5432/tcp nautobot-docker-postgres-1
6f83afb86296 redis:alpine "docker-entrypoint.s…" 3 days ago Up 3 days 6379/tcp nautobot-docker-redis-1
Ok, got it running. How do I upgrade Nautobot?
Suppose a new version is out. What do we do?
- Read the release notes first!!! Don’t rush into it before you know what you are getting into! There may be significant changes there.
- Get a backup of the DB (assuming you have a backup of the files you used to set things up and can re-deploy a new instance within minutes). The process for backup and restore is referenced here, and also in the next paragraph (sort of, not exactly).
- Get a snapshot of your virtual machine if you are running the docker host on a VM. There is a chance there are more docker based deployments on that host; I suggest stopping everything (check that everything is stopped with docker ps) and running a package update before getting your snapshot.
- Erase your instance (but not the postgres db data) with docker-compose down.
- Erase the older images with the commands mentioned above (docker image rm ...).
- Edit Dockerfile-LDAP so that it points to a new base image tag. Consult the latest image tag for the correct python version on docker hub: https://hub.docker.com/r/networktocode/nautobot/tags . The latest version at this time is 1.5.6. There is a way to point to the latest minor version for a major version, for example 1.5 points to the latest 1.5.x. I use the images based on python 3.10.
- Comment out the nautobot config inclusion in Dockerfile-LDAP and docker-compose.yml.
- Build the image, launch Nautobot.
- Watch the logs until it’s stable!!
- Copy the config inside the container to your docker host directory and name it accordingly, for example:
docker cp nautobot-docker-nautobot-1:/opt/nautobot/nautobot_config.py ./nautobot_config.py.156.py
- Compare the contents of the new config file with your old config file to check for changes in the config body. Adapt your original file accordingly.
- Erase the instance (again, not the db volume) and the images. Uncomment the inclusion of nautobot_config.py in Dockerfile-LDAP and docker-compose.yml.
- Build the image again, launch Nautobot, watch the logs!
- Connect, login, enjoy your new version! Remember to test everything before you erase your snapshot.
- Don’t forget to bring everything else you shut down back up!
Export / Import data
You may want to keep a backup outside the server, or, as in my case, you may have more than one instance, each with a specific role but using the same data.
Here are my export and import scripts for this case. Prior to these I have set up SSH login from each server to every other by creating keys and running ssh-copy-id for the current user. This way the scp commands don’t require interactive authentication and no passwords need to be included in the scripts.
Export
#!/bin/bash
echo "removing old export if it exists"
if [ -s "nautobot.sql" ]; then
rm nautobot.sql
fi
echo "exporting data from db"
docker exec -i nautobot-docker-postgres-1 pg_dump -h localhost -U nautobot nautobot > nautobot.sql && docker cp nautobot-docker-postgres-1:/nautobot.sql /opt/nautobot-docker/
echo "copying file to other servers"
echo "trying server4"
scp /opt/nautobot-docker/nautobot.sql root@server4:/opt/nautobot-docker/
echo "trying server1"
scp /opt/nautobot-docker/nautobot.sql root@server1:/opt/nautobot-docker/
echo "trying server2"
scp /opt/nautobot-docker/nautobot.sql root@server2:/opt/nautobot/
I should briefly explain this scenario. The main server – server3 – is a docker host running the main production Nautobot instance. Server4 is running the DR prod instance. Server1 is the test instance on the main site. All those are docker based and run with docker-compose. I always test everything on server1 first.
Server2 is a vm based instance where dependencies are easier to show. In the case of a VM based instance, keep in mind that env variables need to be defined in the service files!
The dump and the cp run on the same line because right after execution I noticed that the sql file was removed from the container, so that was the only way I found to get it out. You can use that file as a db backup too, but keep in mind that the Nautobot DB schema is tied to the Nautobot version. Usually there are no significant differences, but it’s not safe to always assume that’s the case. I have never restored a DB exported by a later release instance to an earlier release instance; it should be avoided.
Import
#!/bin/bash
echo "Stop nautobot containers"
docker container stop nautobot-docker-nautobot-1
docker container stop nautobot-docker-celery_worker-1
docker container stop nautobot-docker-celery_beat-1
echo "dropping existing db"
docker exec -i nautobot-docker-postgres-1 dropdb -U nautobot nautobot
echo "creating new db"
docker exec -i nautobot-docker-postgres-1 createdb -U nautobot nautobot
echo "importing data"
cat nautobot.sql | docker exec -i nautobot-docker-postgres-1 psql -U nautobot
echo "Granting Access Rights"
docker exec -i nautobot-docker-postgres-1 psql -U nautobot -c 'GRANT ALL PRIVILEGES ON DATABASE nautobot TO nautobot;'
echo "Start nautobot containers"
docker container start nautobot-docker-nautobot-1
docker container start nautobot-docker-celery_worker-1
docker container start nautobot-docker-celery_beat-1
echo "Done importing, you can connect now"
You need to bring down the processes accessing the db before you do work on it. The message ‘you can connect now‘ is misleading: it means you can connect to the db, but it takes a little more time for the GUI service to be up and running. So wait a while and then try it.
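If you want to script that wait instead of guessing, something like this should work, assuming the default health endpoint is reachable through nginx on your server name:
until curl -ksf https://servername.domain.gr/health/ > /dev/null; do sleep 5; done; echo "GUI is up"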
Import is supposed to happen on a different server than the one where you did the export. But you can run an import on the main server as well, to restore a different ‘snapshot’ of the DB, if that’s useful to you.
Other comments?
I need to say a couple of things out loud. I have felt for a long time that the deployment of Nautobot on Docker has been overlooked, and that not enough team effort has been invested in testing, documenting and polishing it compared to the other options. After thinking about it for a long time, and having spent almost a year using it on Docker in production, I created an issue in the nautobot-docker-compose repo, here. I haven’t seen a big change in that situation yet. Deploying solutions on docker is often underestimated, but I believe it’s the realistic option in the enterprise space, at least in Europe, where the market is smaller.
That said, I also have to admit a few things:
- Josh Vanderaa has done exceptional work in trying to document this and make it available as a solution to everyone, and that is commendable. He has also kindly provided me with help, and I have always tried to give feedback in return. This is also a public ‘thank you’, Josh!
- I have been unofficially informed that my issue submission caused a lot of noise and opinions were expressed internally at NTC. Kubernetes was mentioned. Not long ago a series of blog articles about deploying Nautobot on Kubernetes saw the light on the NTC blog: #1, #2, #3. I am not saying I am the reason behind this of course, that would be ridiculous. But it’s a change in direction that makes a lot of sense as a strategic move to get Nautobot more accepted in corporate environments and cloud ready infrastructure. At the same time it doesn’t cover the gap in that enterprise space where Kubernetes seems too heavy and requires significant investment from the engineers involved. Remember, it’s still supposed to be deployed and used by network engineers. Maybe they can be a little invested in devops and automation, but Kubernetes is a whole other beast, so no safe bets there.
- NTC is also a company offering services. Keeping a balance for that can be hard.
- Help can always be sought in the NTC Nautobot slack and will probably be received. Hell, I am also in there and will always try to help others as I have received help when I needed it. If you are on that train, you are not alone.
Next steps
For me? Developing/Deploying more things:
- Hashicorp Vault as secrets provider
- ChatOps solution PoC based on MS-Teams
- Deploying more custom solutions as jobs
- Using Nornir for jobs internally in nautobot with the nautobot-plugin-nornir. I think that’s a big deal. I am still waiting for more things to be published showing how to use it; ‘take a look at the Golden-Config plugin to understand how it’s used’ doesn’t cover me. Again, I understand that time and resources are an issue, there are other priorities, and giving out the tools while your company provides services using those tools is a hard balance. But the tools are already out there. You have a choice of keeping the ‘secrets’ to yourself, of enlarging the market by showing others how to use them, or something in between. Hard choice. But I am personally not in the least confused as to which side I am on. I will get back to trying to understand things on my own as soon as I get a break. If I do, I will be posting again. If others or NTC people post before I do, I will celebrate that and use the info. Simple as that.
- Integrate Nautobot with DNA Center and other tools and components, just as much as (and more than) I have integrated it with Cisco Prime Infrastructure. Our team will be using this tool, and integration with more components of our infrastructure (Firewall Mgmt DBs, Active Directory infrastructure components, etc) is critical for me. I would prefer to deploy this integration as jobs and plugins (I have some external working code ready).
- Enrich the data, integrating more info on racked devices, circuits, patch panels, cables, etc.
- The sky is the limit.
For this series of posts?
- Providing examples of jobs for all kinds of operations based on what we use so far.
- Testing older code written for pynetbox against pynautobot.
There will be a next post in the following days, hors-série from this series, where I will describe how we wrote code to collect mac addresses from the access network using Nornir, with Nautobot as the inventory, storing the data in InfluxDB v2.x. That can be considered another use case for Nautobot and may give you a few ideas.
Until then, you can find me on Twitter under @mythryll or on the nautobot slack (sorry, I won’t mention my callsign there, but I am not that hard to find).
I hope this has been a useful read to you, take care!