  • It's almost always Cloudflare. The fucking cunts won't ever let my browser onto any site using their "services". Then there's hCaptcha: "Solve my puzzles till the end of time in a fucking loop, and no, you're never getting into the site." I hate them.

  • Bash @lemmy.ml
    cmysmiaczxotoy @lemm.ee

    Bash script to download and search youtube subtitles and output clickable timestamped urls

    cross-posted from: https://lemm.ee/post/23155648

    Here is the script.

    #!/usr/bin/env bash
    # Download and search YouTube subtitles, printing clickable timestamped URLs.
    # Dependencies: yt-dlp, awk, perl, and at least one of ugrep, ripgrep, or grep.
    # Usage: script youtube_url


    main() {
        url="$1"
        check_if_url
        get_video_id
        search_for_downloaded_matching_files
        set_download_boolean_flag
        download_subs
        read_and_format_transcript_file
        echo_description_file
        user_search
    }


    # Abort unless the argument is an https URL.
    check_if_url() {
        local regex='^https://[^[:space:]]+$'
        if ! [[ $url =~ $regex ]]; then
            echo "Invalid input. Valid input is a url matching regex ${regex}"
            exit 1
        fi
    }


    # Pull the video id out of the v= query parameter.
    get_video_id() {
        video_id=$(echo "$url" | sed -n 's/.*v=\([^&]*\).*/\1/p')
    }


    search_for_downloaded_matching_files() {
        # Find the newest downloaded files matching the video_id
        transcript_file="$(  /usr/bin/ls -t --time=creation "$PWD"/*${video_id}*\.vtt 2>/dev/null | head -n 1  )"
        descr
      
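    The preview cuts the script off before the part that actually emits the links, so here is a rough sketch of the general technique only, not the author's exact code: a VTT cue timestamp such as 00:01:23.456 can be converted to whole seconds and appended to the watch URL as &t=NNs, which YouTube treats as a jump-to-time link.

        #!/usr/bin/env bash
        # Hypothetical helper, not from the original post: turn a VTT cue
        # timestamp plus a video id into a clickable timestamped URL.
        timestamp_to_url() {
            local ts="$1" video_id="$2"
            local h m s
            # Drop the milliseconds, then split HH:MM:SS into fields.
            IFS=: read -r h m s <<< "${ts%.*}"
            # 10# forces base-10 so leading zeros are not read as octal.
            local total=$(( 10#$h * 3600 + 10#$m * 60 + 10#$s ))
            echo "https://www.youtube.com/watch?v=${video_id}&t=${total}s"
        }

        # Example: prints https://www.youtube.com/watch?v=dQw4w9WgXcQ&t=83s
        timestamp_to_url "00:01:23.456" "dQw4w9WgXcQ"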
  • Category: Losses (change since previous update)
    Aircraft: 324
    Anti-Aircraft Warfare Systems: 610 (+1)
    Armoured Personnel Vehicles: 10752 (+60)
    Artillery Systems: 8175 (+38)
    Cruise Missiles: 1610
    Helicopters: 324
    MLRS: 926 (+3)
    Personnel: ~347160 (+1090)
    Special Equipment: 1198 (+4)
    Submarines: 1
    Tanks: 5783 (+44)
    UAV Operational-Tactical Level: 6290 (+12)
    Vehicles & Fuel Tanks: 10822 (+56)
    Warships/Boats: 22
  • Bash @lemmy.ml
    cmysmiaczxotoy @lemm.ee

    Fast youtube download bash script using custom build of aria2

    I made a script that downloads from youtube super fast using a custom aria2 build.

    Aria2 https://github.com/P3TERX/Aria2-Pro-Core/releases

    ffmpeg build https://github.com/yt-dlp/FFmpeg-Builds/releases

    I chose ffmpeg-master-latest-linux64-gpl.tar.xz

    #!/usr/bin/env bash
    #set -x

    # Require a download URL argument.
    if [[ -z "$*" ]]; then
        echo "specify download url"
        exit 1
    fi

    dir_dl="$PWD"
    url="$*"

    ffmpeg_dir="$HOME/.local/bin.notpath/"
    download_archive_dir="$HOME/Videos/yt-dlp/"
    download_archive_filename=".yt-dlp-archived-done.txt"

    mkdir -p "$download_archive_dir"

    youtube_match_regex='^.*(youtube[.]com|youtu[.]be|youtube-nocookie[.]com).*$'

    # For YouTube links, strip extra query parameters down to the bare video URL.
    if [[ "$1" =~ $youtube_match_regex ]]; then
        url="$(echo "$@" | perl -pe 's/((?:http:|https:)*?\/\/(?:www\.|)(?:youtube\.com|m\.youtube\.com|youtu\.|#youtube-nocookie\.com).*(?:c(?:hannel)?\/|u(?:ser)?\/|v=|v%3D|v\/|(?:a|p)\/(?:a|u)\/\d.*\/|watch\?|vi(?:=|\/)|\/#embed\/|oembed\?|be\/|e\/)([^&?%#\/\n]+)).*/$1/gm')"
        yt-dlp \
        --check-formats \
        --clean-
      
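    The preview also cuts this script off mid-invocation, so the aria2 hookup itself isn't visible above. For reference, the usual way to route yt-dlp downloads through an external aria2c binary is its --downloader and --downloader-args options; a minimal sketch, with illustrative connection settings rather than the author's exact flags:

        #!/usr/bin/env bash
        # Sketch: hand the actual transfer off to aria2c for parallel,
        # segmented downloading. -x/-s set per-server/total connections
        # and -k the segment size; these values are illustrative.
        yt-dlp \
            --downloader aria2c \
            --downloader-args "aria2c:-x 16 -s 16 -k 1M" \
            "$1"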
  • I have always been a sailor, but I did pay for Netflix for 6-7 years, mainly to share the account with others. When the content evaporated, I dropped it. I paid for HBO's streaming service to support Game of Thrones, then dropped it too. Even when paying for a service I always watched WEB-DLs, because of the quality.