Using Teleport Machine ID with Ansible

In this guide, you will set up an Ansible playbook to run the OpenSSH client with a configuration file that is automatically managed by Machine ID.

Prerequisites

You will need the following tools to use Teleport with Ansible.

  • The Teleport Auth Service and Proxy Service version >= 9.0.0, deployed on your own infrastructure or managed via Teleport Cloud.
  • The tsh client tool version >= 9.2.3.
  • The OpenSSH client ssh
  • Ansible >= 2.9.6
  • (Optional) The jq tool to process JSON output
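
To confirm that the client tools are available, you can check their versions; the exact output will vary with your installation:

tsh version
ssh -V
ansible --version
jq --version
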
Machine ID and TLS Routing

TLS Routing support will be added to Machine ID in Teleport 9.3. Until then, the Teleport Proxy Service must be configured with a dedicated SSH listener:

version: v1
proxy_service:
  enabled: "yes"
  listen_addr: "0.0.0.0:3023"
  ...
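
After updating the Proxy Service configuration, restart Teleport so the dedicated SSH listener takes effect. If you run Teleport under systemd, this is typically:

sudo systemctl restart teleport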

In addition, if you have not already done so, follow the Machine ID Getting Started Guide to create a bot user and start Machine ID.

If you followed the guide above, note the --destination-dir=/opt/machine-id flag, which defines the directory where the SSH certificates and OpenSSH configuration used by Ansible will be written.

In particular, you will be using the /opt/machine-id/ssh_config file in your Ansible configuration to define how Ansible should connect to Teleport Nodes.
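
For reference, a bot started by following that guide is launched with a command along these lines. The token, CA pin, and Auth Service address below are placeholders, and the exact flags may differ slightly depending on your Teleport version:

tbot start \
   --data-dir=/var/lib/teleport/bot \
   --destination-dir=/opt/machine-id \
   --token=<bot-join-token> \
   --ca-pin=<ca-pin-hash> \
   --auth-server=auth.example.com:3025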

Step 1/2. Configure Ansible

Create a folder named ansible where all Ansible files will be collected.

mkdir -p ansible
cd ansible

Create a file called ansible.cfg. We will configure Ansible to run the OpenSSH client with the configuration file generated by Machine ID, /opt/machine-id/ssh_config.

[defaults]
host_key_checking = True
inventory = ./hosts
remote_tmp = /tmp

[ssh_connection]
scp_if_ssh = True
ssh_args = -F /opt/machine-id/ssh_config

You can create an inventory file called hosts manually, or use a script like the one below to generate it from your environment. Note that example.com below is the name of your Teleport cluster.

# Replace ".example.com" below with the name of your cluster.
tsh ls --format=json | jq -r '.[].spec.hostname + ".example.com"' > hosts
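
The resulting hosts file is a plain list of Node hostnames, one per line, for example:

node-1.example.com
node-2.example.com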

Step 2/2. Run a playbook

Finally, let's create a simple Ansible playbook, playbook.yaml.

The playbook below runs hostname on all hosts. Make sure to set the remote_user parameter to a valid SSH username that works with the target host and is allowed by Teleport RBAC.

- hosts: all
  remote_user: ubuntu
  tasks:
    - name: "hostname"
      command: "hostname"

From the folder ansible, run the Ansible playbook:

ansible-playbook playbook.yaml

PLAY [all] *****************************************************************************************************************************************

TASK [Gathering Facts] *****************************************************************************************************************************

ok: [terminal]

TASK [hostname] ************************************************************************************************************************************

changed: [terminal]

PLAY RECAP *****************************************************************************************************************************************

terminal : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

You are all set. You have provided your machine with short-lived certificates tied to a machine identity that can be rotated, audited, and controlled with all the familiar Teleport access controls.

Troubleshooting

If Ansible cannot connect, you may see an error like this one:

example.host | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname node-name: Name or service not known",
    "unreachable": true
}

Check that the Host patterns in /opt/machine-id/ssh_config match the hostnames in your inventory file, and tweak either as needed.
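
For example, you can list the Host stanzas in the generated configuration and compare them against the entries in your inventory:

grep '^Host ' /opt/machine-id/ssh_config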

Try the SSH connection using the generated ssh_config in verbose mode to inspect the error:

ssh -vvv -F /opt/machine-id/ssh_config [email protected]

If ssh works, try running the playbook with verbose mode on:

ansible-playbook -vvv playbook.yaml
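
You can also test connectivity to every host in the inventory with Ansible's ping module, which uses the same SSH configuration from ansible.cfg:

# Replace "ubuntu" with the SSH username used in your playbook.
ansible all -m ping -u ubuntu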