  • After some tinkering, **yes**, this was indeed my issue. The logs for pictrs and lemmy in particular were between 3 and 8 GB after only a couple of weeks of info-level logging.
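
    A quick way to confirm that container logs are what's eating the disk (a rough check, assuming the default json-file log driver; run it as root, since the files under /var/lib/docker aren't world-readable):

    du -sh /var/lib/docker/containers/*/*-json.log | sort -h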

    Steps to fix (the post above has more detail, but I'm adding my full workflow in case it helps folks; some of this wasn't super apparent to me). These steps assume a Docker/Ansible install:

    1. SSH to your instance.
    2. Change to your instance's install directory

    most likely: cd /srv/lemmy/{domain.name}

    3. List the currently running containers:

    docker ps --format '{{.Names}}'

    Now for each docker container name:

    4. Find the path/name of the associated log file:

    docker inspect --format='{{.LogPath}}' {one of the container names from above}

    5. Optionally, check the file size of the log:

    ls -lh {path to log file from the inspect command}

    6. Clear the log:

    truncate -s 0 {path to log file from the inspect command}
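
    If you'd rather not repeat the inspect/check/truncate steps by hand for each container, a rough loop like the sketch below does the same thing for every running container in one pass (assuming you're running as root, since the log files live under /var/lib/docker):

    for name in $(docker ps --format '{{.Names}}'); do
        # Resolve this container's log file, print its size, then empty it
        log=$(docker inspect --format='{{.LogPath}}' "$name")
        echo "$name: $(du -h "$log" | cut -f1)"
        truncate -s 0 "$log"
    done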

    After you have cleared any logs you want to clear:

    7. Modify docker-compose.yml, adding the following logging section to each service (there's a fuller excerpt after these steps showing where it sits in the file):

    logging:
      driver: "json-file"
      options:
        max-size: "100m"

    8. Recreate the containers so the new logging options take effect (a plain docker-compose restart won't pick up changes to docker-compose.yml)

    docker-compose up -d
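
    For context, this is roughly where the logging block sits in docker-compose.yml. The service names here are just examples from a typical Lemmy install; yours may differ, and any existing settings under each service stay as they are:

    services:
      lemmy:
        # ...existing image, ports, environment, etc. ...
        logging:
          driver: "json-file"
          options:
            max-size: "100m"
      pictrs:
        # ...existing image, ports, environment, etc. ...
        logging:
          driver: "json-file"
          options:
            max-size: "100m"

    The json-file driver also accepts a max-file option if you'd rather keep a few rotated files instead of one capped file.

    To double-check that the new options actually applied after recreating the containers, something like this should print the configured driver and size cap (going from my reading of the docker inspect output, so treat the field path as an assumption):

    docker inspect --format='{{.HostConfig.LogConfig}}' {one of the container names from above}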

  • UPDATE:

    If anyone else is running into consistently rising disk usage, I am pretty sure this is my issue (logs running with no size cap):

    https://lemmy.eus/post/172518

    Trying out ^ and will update with my findings if it helps.