I've been developing my first resin-backed app. I started with a Dockerfile based on resin/raspberrypi-python and ran it in a local Docker container while I iterated on the first pass of the application code - roughly the first 20% of the project.
Once everything was configured properly and I had my code mildly organized, I used resin.io to build and push the container to resin.os on a nearby Raspberry Pi, and I switched to interactive development via ssh, iterating mostly without rebuilding the image (perhaps just five times over two days). The mgrenier/remote-ftp Atom package makes remote development on the rpi super easy.
Consequently, I've ended up installing a few extra packages on my dev rpi that are only useful for local development, and I intend to eventually remove them from my Dockerfile. One of them was the AWS CLI (pip install awscli), which I'm using interactively on the pi while I add the boto3 Python AWS SDK to my code.
Long story short, the aws CLI tool wasn't able to display any help files on the pi, and it took me quite a while to figure out why. It depends on groff, and although apt-get install groff apparently produced a working groff binary in the PATH, groff had in fact been installed without the output modules (the ascii and utf8 "devices", in groff parlance) that aws assumed would be present. The reason: apt-get (via dpkg) had been configured in the base image to skip installing certain documentation resources, including anything under /usr/share/groff. Commenting out the offending line in /etc/dpkg/dpkg.cfg.d/01_nodoc and then running apt-get install --reinstall groff-base restored aws's expected behavior.
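For anyone hitting the same thing, the workaround looked roughly like this (a sketch; the exact path-exclude pattern in 01_nodoc is an assumption, so inspect the file first and adjust the sed pattern to match):

```sh
# Inspect the dpkg config that filters out documentation files
cat /etc/dpkg/dpkg.cfg.d/01_nodoc

# Comment out the path-exclude line that covers /usr/share/groff
# (the pattern below is a guess; adjust it to match your file)
sed -i 's|^path-exclude.*groff.*|# &|' /etc/dpkg/dpkg.cfg.d/01_nodoc

# Reinstall so the previously skipped files actually get laid down
apt-get install --reinstall groff-base
```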
I think it's reasonable to exclude documentation files by default in the base images to save space, along with most of the other bloaty tools these "skinny" distros do away with, but it shouldn't lead to unexpected behavior from tools like apt-get. So please consider adding a note in the docs letting users know what kinds of things their base image's package manager is configured not to install.
Some general thoughts on the development pattern with resin.io:
It would be great if there were a couple of resin.os base images "fattened up" to support interactive development environments, but fundamentally identical to the existing "skinny" production images. Then I could:

1. do as much initial work as possible with the dev image running in a local container (simulating serial devices, GPIO, etc. if possible);
2. push it over to the pi when necessary (to test embedded libraries and connected peripherals) and continue working interactively, then bounce back to the local container;
3. switch to the production base image, check that everything still works, and release/repeat. Maybe this would be as simple as a single change to the FROM directive in the Dockerfile (see the sketch below), or, if the Dockerfiles required more elaborate changes, maintaining separate dev and prod branches in git.
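For illustration, the FROM swap in step 3 might look like this (the :dev tag is purely hypothetical; no such image exists today):

```Dockerfile
# Hypothetical "fat" dev variant of the base image (tag is made up)
FROM resin/raspberrypi-python:dev

# ...identical build steps for dev and prod...

# The production Dockerfile would differ only in the base image:
# FROM resin/raspberrypi-python
```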
Is anyone following a dev pattern like this already?
100ideas changed the title from 'Mention in docs: default "nodoc" dpkg config may cause unexpected apt-get behavior' to 'Needs Documentation: dpkg config may cause unexpected apt-get behavior' on Jan 7, 2017.
@100ideas thanks a lot for your feedback; we will update our docs soon.
The idea of development base images is really cool. I'm not sure if we can emulate stuff like serial devices or GPIO in a docker image, but we will give it a try 👍
Incidentally, I've switched from using the boto3 Python AWS library to just making calls to the aws CLI via subprocess.Popen('cmd-argstring', shell=True) in Python, so it has turned out handy to have the aws help system working on my resin rpi dev remote.
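Roughly like this, as a minimal sketch (the S3 bucket name and file paths are placeholders, not my actual command):

```python
import subprocess

# Shell out to the aws CLI instead of calling boto3 directly.
# The bucket and paths below are placeholders for illustration.
cmd = "aws s3 cp /data/readings.csv s3://my-example-bucket/readings.csv"
proc = subprocess.Popen(cmd, shell=True,
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = proc.communicate()
if proc.returncode != 0:
    raise RuntimeError("aws cli failed: %s" % err.decode())
print(out.decode())
```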