If you’ve tried running Packer with CentOS on AWS, you probably noticed this:
sudo: sorry, you must have a tty to run sudo
That’s because CentOS, as a security measure, requires a tty for sudo by default. It’s easily worked around by adding:
"ssh_pty": true
to your builder configuration.
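In a JSON template, that setting sits alongside the rest of the builder options. A minimal sketch using the `amazon-ebs` builder (the region, instance type, and AMI name here are placeholders):

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-7abd0209",
      "instance_type": "t2.micro",
      "ssh_username": "centos",
      "ssh_pty": true,
      "ami_name": "centos7-base-{{timestamp}}"
    }
  ]
}
```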
Now, what happens when you try to use Ansible to provision your instance? If you’re starting out with a basic AMI (like ami-7abd0209 for CentOS 7), you’ll probably try something like this:
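A minimal sketch of such a first attempt, assuming the `amazon-ebs` builder and the `ansible-local` provisioner (region, instance type, AMI name, and playbook path are placeholders; the EPEL step is assumed, since Ansible isn’t in the base CentOS repos):

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-7abd0209",
      "instance_type": "t2.micro",
      "ssh_username": "centos",
      "ssh_pty": true,
      "ami_name": "centos7-ansible-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo yum install -y epel-release",
        "sudo yum install -y ansible"
      ]
    },
    {
      "type": "ansible-local",
      "playbook_file": "playbook.yml"
    }
  ]
}
```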
This installs Ansible and runs the playbook. Well, it tries to run the playbook, and throws this error:
fatal: [default]: UNREACHABLE! => {"changed": false, "msg": "SSH Error: data could not be sent to the remote host. Make sure this host can be reached over ssh", "unreachable": true}
That’s because Ansible’s provisioner in Packer doesn’t work with "ssh_pty": true. There’s no way to tell Packer to change that parameter between two provisioners (use true for the shell provisioner and false for Ansible). The easiest way to work around this is to have two Packer templates: one with the shell provisioner, which installs Ansible on your system, and the other with the Ansible provisioner, which does the actual work. After that, there’s only one more issue to deal with:
fatal: [default]: FAILED! => {"changed": false, "failed": true, "module_stderr": "sudo: sorry, you must have a tty to run sudo\n", "module_stdout": "", "msg": "MODULE FAILURE", "parsed": false}
Again, sudo in CentOS requires a tty by default, and we can’t have one when using Ansible. To solve this, extend your shell provisioner code like this:
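The original snippet isn’t reproduced here; one common way to do it is a sed one-liner as the last inline command (a sketch, assuming the inline form of the shell provisioner):

```json
{
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo yum install -y epel-release",
        "sudo yum install -y ansible",
        "sudo sed -i '/requiretty/s/^/#/' /etc/sudoers"
      ]
    }
  ]
}
```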
The last line will comment out the requiretty option in sudoers, which will allow Ansible’s provisioner to connect without a tty.
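If you want to see what that sed expression does before baking it into an image, you can try it on a scratch copy of a sudoers-style file (a local sanity check, not part of the original template):

```shell
# Every line matching "requiretty" gets a leading '#';
# everything else is left untouched.
tmp=$(mktemp)
printf 'Defaults    requiretty\nDefaults    env_reset\n' > "$tmp"
sed -i '/requiretty/s/^/#/' "$tmp"
cat "$tmp"
rm -f "$tmp"
```

The requiretty line comes back commented out, while the env_reset line is unchanged.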
tl;dr
To use Packer and Ansible with CentOS, you need two Packer templates. The first one installs Ansible and disables the default requiretty:
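A minimal sketch of that first template (region, instance type, and AMI name are placeholders; the EPEL step is assumed, since Ansible isn’t in the base CentOS repos):

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-7abd0209",
      "instance_type": "t2.micro",
      "ssh_username": "centos",
      "ssh_pty": true,
      "ami_name": "centos7-ansible-base-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo yum install -y epel-release",
        "sudo yum install -y ansible",
        "sudo sed -i '/requiretty/s/^/#/' /etc/sudoers"
      ]
    }
  ]
}
```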
The second one actually runs your Ansible playbook:
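A sketch of the second template, with ssh_pty left at its default of false and `ansible-local` assumed as the provisioner (it needs Ansible on the instance, which the first template took care of). The source_ami placeholder stands for the AMI the first template produced, and the playbook path is illustrative:

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.micro",
      "ssh_username": "centos",
      "ami_name": "centos7-provisioned-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "ansible-local",
      "playbook_file": "playbook.yml"
    }
  ]
}
```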