• I am currently trying to spin up an Elasticsearch service. I tried to start with the official Elasticsearch Docker image (https://hub.docker.com/_/elasticsearch/), but the problem is that it runs either as the elasticsearch user or as root.

    Are there any convenient ways of modifying such an image to work under OpenShift? Can I somehow find all files that are accessible to the elasticsearch user and change their permissions so that it works under OpenShift?

    Or did anyone have any luck with a different elasticsearch image?

  • APPUiO Staff

    Which version of the image are you using?

    There are various possibilities, depending on the concrete issues. To name a few:

    You should be able to find all files owned by the elasticsearch user with find / -user elasticsearch.
    You might also want to look at https://github.com/openshift/origin-aggregated-logging/tree/master/elasticsearch, which is the source of the Elasticsearch container integrated with OpenShift. On APPUiO the integrated EFK (Elasticsearch, Fluentd, Kibana) stack is available at https://logging.appuio.ch/.

    Please note that you might encounter index corruption when running Elasticsearch/Lucene on network storage such as NFS or GlusterFS. Should you encounter such issues on APPUiO, please contact us so that we can find a solution that works for you.
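    For context on the permission question: under OpenShift's default restricted policy the container runs with a random high UID whose only predictable property is membership in group 0 (root). A minimal shell sketch of the resulting requirement, using a scratch directory as a stand-in for the Elasticsearch data path:

    ```shell
    # Under OpenShift's restricted SCC the process UID is arbitrary, but the GID is 0.
    echo "running as uid=$(id -u) gid=$(id -g)"

    # Any path the process writes to therefore needs group write/traverse permission.
    # A scratch directory stands in for /usr/share/elasticsearch/data here:
    dir=$(mktemp -d)
    chmod g+rwx "$dir"    # same group-permission pattern the fix-permissions helper applies image-wide
    ls -ld "$dir"
    ```

    This is only a sketch of the permission model; the fix-permissions helper used in the Dockerfile below applies the same idea recursively to the whole installation directory.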

  • APPUiO Staff

    Here are some example scripts which create a working Elasticsearch pod: a Dockerfile, a fix-permissions helper, a start.sh entrypoint and an elasticsearch.yml configuration.

    Dockerfile (ELASTICSEARCH_VERSION is referenced below but was never defined; the ARG line is added so a version can be supplied with docker build --build-arg ELASTICSEARCH_VERSION=...):

    FROM alpine:3.4
    ARG ELASTICSEARCH_VERSION
    ENV PATH $PATH:/usr/share/elasticsearch/bin
    COPY fix-permissions /usr/libexec/fix-permissions
    RUN \
      apk add --no-cache ca-certificates curl openjdk7-jre-base && \
      # install elasticsearch
      adduser -S elasticsearch && \
      echo Downloading elasticsearch... && \
      curl -skL https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-$ELASTICSEARCH_VERSION.tar.gz | tar -xz -C /tmp && \
      mv /tmp/elasticsearch* /usr/share/elasticsearch && \
      mkdir /usr/share/elasticsearch/logs /usr/share/elasticsearch/data && \
      /usr/libexec/fix-permissions /usr/share/elasticsearch && \
      # verify
      echo JAVA VERSION: && \
      java -version 2>&1 && \
      elasticsearch -v && \
      # cleanup
      rm -rf /var/cache/apk/*
    COPY elasticsearch.yml /usr/share/elasticsearch/config/elasticsearch.yml
    VOLUME ["/usr/share/elasticsearch/data", "/usr/share/elasticsearch/logs"]
    COPY start.sh /start.sh
    EXPOSE 9200 9300
    USER elasticsearch
    ENTRYPOINT ["/start.sh"]
    CMD ["elasticsearch"]


    fix-permissions:

    #!/bin/sh
    # Fix permissions on the given directory to allow group read/write of
    # regular files and execute of directories.
    chown -R elasticsearch "$1"
    chgrp -R 0 "$1"
    chmod -R g+rw "$1"
    find "$1" -type d -exec chmod g+x {} +
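    The chmod/find steps of the script can be tried on a scratch tree without root (only the chown/chgrp lines need privileges); a quick sketch:

    ```shell
    # Demo of the non-privileged part of fix-permissions on a scratch tree.
    umask 022
    dir=$(mktemp -d)
    mkdir -p "$dir/config"
    echo 'cluster.name: demo' > "$dir/config/elasticsearch.yml"

    chmod -R g+rw "$dir"                      # group read/write on everything
    find "$dir" -type d -exec chmod g+x {} +  # group traverse on directories only

    stat -c '%a %n' "$dir/config" "$dir/config/elasticsearch.yml"
    ```

    Directories end up group-writable and traversable while regular files gain only read/write, which is exactly what an arbitrary UID in group 0 needs.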


    start.sh:

    #!/bin/sh
    set -e
    # Deployment recommendations: http://www.elastic.co/guide/en/elasticsearch/guide/current/deploy.html
    echo Checking elasticsearch setup...
    mapmax=`cat /proc/sys/vm/max_map_count`
    filemax=`cat /proc/sys/fs/file-max`
    filedescriptors=`ulimit -n`
    echo "fs.file_max: $filemax"
    echo "vm.max_map_count: $mapmax"
    echo "ulimit.file_descriptors: $filedescriptors"
    if [ "$filedescriptors" -lt "64000" ] ; then
      echo ""
      echo "Elasticsearch recommends 64k open files per process; you have $filedescriptors."
      echo "The Docker daemon should be run with increased file descriptors to raise the limit inside the container,"
      echo "e.g. \`ulimit -n 64000\`"
    else
      echo "You have more than 64k allowed file descriptors. Awesome."
    fi

    if [ "$1" = 'elasticsearch' ]; then
      echo -e '\nStarting elasticsearch...'
      shift
      exec elasticsearch "$@"
    fi
    exec "$@"
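    The dispatch at the end of the entrypoint follows the common docker-entrypoint pattern: run the main service when the first argument names it, otherwise execute the given command verbatim. A standalone sketch with a hypothetical run_server function standing in for the real elasticsearch binary:

    ```shell
    # Hypothetical stand-in for the real elasticsearch binary.
    run_server() { echo "server started with args: $*"; }

    dispatch() {
      if [ "$1" = 'server' ]; then
        shift              # drop the service name so only the flags are forwarded
        run_server "$@"
        return
      fi
      "$@"                 # any other command is executed as-is
    }

    dispatch server --port 9200
    dispatch echo "arbitrary command"
    ```

    Dropping the service name with shift before forwarding the arguments is what lets extra flags (e.g. a custom port) reach the real binary without the command name being repeated.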


    elasticsearch.yml:

    # Note: the parent keys (http, transport, path, node) were lost from the
    # original post and are inferred from the indentation and the standard
    # Elasticsearch settings; adjust them if your configuration differed.
    http:
      port: 9200
    transport:
      tcp:
        port: 9300
    path:
      data: /usr/share/elasticsearch/data
      logs: /usr/share/elasticsearch/logs
    node:
      name: ${HOSTNAME}
    script.disable_dynamic: false

  • Yeah, that seems to work for now, thanks. I will see if I need a newer version, though.
