
Deployment


I wanted to configure the IAS by adding better weather station (WS) alarms, as described in WeatherStationAlarms.

For that I would need to change the CDB, which is feasible as it is in our repo. My understanding is that I could change the CDB in /usr/src/cdb and then restart the supervisor container to see the effect of the changes. The disadvantage is that I would have to copy the changes back into our repo (SFM: Not necessarily, you can just modify the files on the local computer, no need to commit to the repo). An alternative is to change the CDB in the repo and restart the entire IAS, which could be a bit slower.
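
A minimal sketch of the first option, assuming the CDB really is in /usr/src/cdb and the supervisor container is literally named "supervisor" (the JSON file name below is a placeholder):

    vi /usr/src/cdb/CDB/Supervisor/Supervisor.json   # edit the configuration in place
    docker restart supervisor                        # the supervisor re-reads the CDB at startup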

What if I have to add new TFs? The TFs we have right now live in the core, so I would have to write a new TF in the master branch of the core, which is not exactly what we want. The life cycle of the CDB, TFs and plugins must not follow the life cycle of the core: this is the reason to have plugins and TFs in a dedicated repo. While this is already true for the plugins, it is not for the TFs. Filters for plugins shall also go in that repo. TFs and filters are loaded at run time with reflection, so the core build succeeds even if TFs and filters are built later.

Without docker containers this is very easy to achieve, as all the executables go in $IAS_ROOT and iasRun does all the magic of building the classpath for java and scala and so on. So without docker I can build a new TF, install it in $IAS_ROOT and restart the supervisor. It does not matter where I build the TF from.
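
A sketch of that docker-less cycle; the module name is a placeholder and the ant targets are assumed to be the usual IAS ones:

    cd MyTransferFunctions/src    # placeholder module containing the new TF
    ant build                     # the jar goes into ../lib for local testing
    ant install                   # pushes the jar into $IAS_ROOT/lib, the integration area
    # then restart the supervisor: the new TF is loaded by reflection at startup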

iasRun builds the classpath by looking for jars first in ../lib and then in $IAS_ROOT/lib. This means that if I am developing the supervisor in Supervisor/src, after building I can run my version of the Supervisor because its jar goes in Supervisor/lib. When finished, ant install pushes the jar into $IAS_ROOT/lib, which is the integration area.
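
A simplified sketch of that precedence (not the real iasRun script, just the idea):

    shopt -s nullglob                                           # skip empty lib folders
    CP=""
    for jar in ../lib/*.jar; do CP="$CP:$jar"; done             # local jars are found first...
    for jar in "$IAS_ROOT"/lib/*.jar; do CP="$CP:$jar"; done    # ...then the integration area
    java -cp "${CP#:}" "$@"                                     # run the requested tool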

It is clear that the docker containers are not intended for development, but IMO our container configuration does not fit operations either, because we should be able to quickly update the CDB (SFM: you can do this now) and to modify and add new TFs, plugins and filters. Changes in the core, web server and display follow another life cycle: a fix in the core must be backported to the master branch to be available in operations, while new features will be implemented in the develop branch.

Another problem we have is about logging. All the logs should go in $IAS_ROOT/log to ease debugging: checking problems in operations currently means changing container whenever we need to correlate the logs of the various tools. (SFM: This is fixed in the develop branch now, just for the logs; if you pull from develop in the repo and do "docker-compose down" and then "docker-compose up -d", the logs will be centralised in a "logs" folder.)
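
Spelled out, the commands SFM refers to (assuming the working copy already tracks the develop branch):

    git pull                  # update the working copy from develop
    docker-compose down       # stop and remove the running containers
    docker-compose up -d      # recreate them in the background
    ls logs/                  # all the tool logs are now centralised here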

Finally, the CDB installed in operations should be checked out of the repository, so that we can change the CDB and commit the changes back to the repo.
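
A sketch of what that would look like; the repository URL and the commit message are placeholders:

    git clone <cdb-repo-url> /usr/src/cdb        # the operational CDB is a working copy
    cd /usr/src/cdb
    # ... edit the CDB ...
    git commit -am "Tune the WS alarm thresholds"
    git push                                     # the change lands back in the repo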

The only solution I see in the short term is to deploy in operations without docker containers, until we find a solution to the problem. This means (a command sketch follows the list):

  • install Kafka in /opt (this will not change often)
  • build the core (creates the IAS_ROOT)
  • install the CDB
  • run the tools with a bash/python script
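
A sketch of those four steps; the Kafka tarball, the repository URLs and the launcher script are all placeholders:

    tar -xzf kafka_*.tgz -C /opt                 # 1. install Kafka in /opt
    git clone <core-repo-url> ias
    (cd ias && ant build install)                # 2. build the core, creating $IAS_ROOT
    git clone <cdb-repo-url> /usr/src/cdb        # 3. install the CDB as a working copy
    ./startIasTools.sh                           # 4. placeholder script that starts each tool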

We shall then refactor our docker containers. This is my proposal (a docker sketch follows the list):

  1. the Kafka service stays untouched (but with a newer version of Kafka)
  2. one container builds and installs $IAS_ROOT from the master branch
  3. one container installs the CDB
  4. start one container for each tool:
    • using the root installed by the other container
    • using the CDB installed by the other container
    • logging in the root
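
One possible shape for this refactoring, using named volumes so the build and CDB containers hand their output to the tool containers; all image and volume names are illustrative:

    docker volume create ias_root                              # the shared $IAS_ROOT
    docker volume create ias_cdb                               # the shared CDB

    docker run --rm -v ias_root:/opt/IasRoot ias-builder       # 2. builds and installs the core (master)
    docker run --rm -v ias_cdb:/opt/cdb ias-cdb-installer      # 3. checks out and installs the CDB

    # 4. one container per tool, mounting both volumes; logs go into the root:
    docker run -d -v ias_root:/opt/IasRoot -v ias_cdb:/opt/cdb ias-supervisor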

SFM: Here are my general comments:

  • I see the problem with the transfer functions; we will try to do something like what you propose and let you know.
  • I think you are mixing the production and development environments. In my opinion all these changes you want to do should be done in the develop branch, tested on another machine, and then pushed to the master branch for a new release. If you consider it something small you can always follow the git-flow cycle and do a "hotfix". Still, be aware that we have not yet updated the webserver and the CDB to the latest changes you have in the develop branch of the Core.
  • You can run the same demo that we have in production but using the develop branches if you use the other docker-compose file, "docker-compose-dev.yml"; this will include the DummyPlugin for testing as well. To do this you need to run "docker-compose -f docker-compose-dev.yml up -d". However, this will not work at the moment because it is incompatible with the core. Let me know if you want to try it and I will try to fix the compatibility soon.
  • I think there is no need to run things without docker. In the worst case you can copy files from the host to the docker container, get inside the container and run processes from there. I know it is complicated and annoying, but probably less so than running everything without docker from scratch.
  • I really think we should not mess with the installation on site right now. These new transfer functions etc. should be tested in a development environment (for example the version we have running in your office) and follow the natural cycle in order to arrive in master.
  • If you really, really consider there is absolutely no option to keep docker as it is now, you can try to redeploy everything manually. We can provide you with a smaller docker-compose file to run the webserver, display and nginx.
  • You mentioned that docker is not for development, but it is!! :) We use it and it is really useful, as you do not need to mess with and install things on the host system. Maybe when you come back we can show you the docker-compose files we use for development (they are in integration-tools/docker/develop) and give you a little docker lecture; I am sure you will love it :)

ACA: Yes Sebastian, I might not know Docker well enough, but I believe it is great and I am happy to use it (and to learn!). Changes in the configuration, i.e. in the CDB, are independent of the life cycle of the software. Development of filters, transfer functions and plugins is also independent of the life cycle of the core: we must be able to change a reduction rule quickly and at any time, without having to wait for the next release. This does not mean that we do not have to follow the usual rules for development, i.e. develop, test, git flow and so on; only the life cycle is different. For this reason I believe that all of them must go in a dedicated repository. We already have one for the plugins; we might simply put the other components there too.
