Open a terminal or command prompt (on Windows, run cmd.exe) and paste in the command. If your browser only gives you the URL and not a complete "curl" command, the command you are trying to build is basically curl [url] -o [outputfile.mp4]. It is best to copy the complete command, like the one Chrome provides, because it may include authentication cookies or other headers; omitting them can prevent your download from working.
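As a minimal offline sketch of the curl [url] -o [outputfile.mp4] pattern, the commands below fetch a local file:// URL instead of a real stream URL (the paths and file contents here are placeholders; for a real download, substitute the URL and any headers your browser gives you):

```shell
# Create a dummy source file standing in for the remote video stream.
printf 'dummy video bytes' > /tmp/source.mp4

# The basic pattern: curl [url] -o [outputfile.mp4].
# -s silences the progress meter; a real browser-copied command may also
# carry headers, e.g. -H "Cookie: ..." which you should keep.
curl -s "file:///tmp/source.mp4" -o /tmp/outputfile.mp4

# Verify the downloaded copy matches the source.
cmp /tmp/source.mp4 /tmp/outputfile.mp4 && echo "download ok"
```

For a real URL you would also typically add -L so curl follows redirects.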
VAME is a general time series quantification method. While our example data use pose-tracking input from DeepLabCut, VAME also works with other pose estimation tools such as SLEAP, DeepPoseKit or B-KinD14,15,54. In principle, other kinds of data, such as a principal component time series of the video data or other sensory signals, can be fed into the model. Throughout this protocol we will use the demonstration data available on the VAME GitHub page: a video of a freely moving mouse in an open-field arena (video-1.mp4) and the corresponding DLC file containing the coordinates of the virtual markers (video-1.csv). The dataset contains 29,967 frames. Note that even this small amount of data is enough to train a working VAME model that achieves good results in terms of motifs and latent space dynamics.
The VAME workflow starts by initializing a new project with the function vame.init_new_project(). It takes four arguments: the project name, a path to the directory of the animal videos, a path specifying the working directory in which the project folder will be created, and a parameter indicating whether the videos are .mp4 or .avi (Fig. 5, first gray box). The user needs to spell out the full path to a video, such as /directory/to/your/video-1.mp4; otherwise the config.yaml file is not correctly initialized. This will create a folder with the project name and the creation date, e.g., Your-VAME-Project-Jun15-2022. Within this folder, four sub-folders (data, model, results and videos) and a config.yaml file will be created; see Fig. 6 for reference. Note that video-1.csv, which contains the DLC pose estimation output, needs to be placed manually into the pose_estimation folder.
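A minimal sketch of this initialization step is shown below. The argument names follow the VAME GitHub documentation, and the paths are placeholders you would replace with your own; the import is guarded so the snippet remains illustrative if VAME is not installed:

```python
# Hedged sketch of initializing a VAME project; paths are placeholders.
try:
    import vame
except ImportError:
    vame = None  # VAME not installed; the call below is shown for illustration

if vame is not None:
    # Full paths to the videos are required, otherwise config.yaml
    # is not correctly initialized.
    config = vame.init_new_project(
        project='Your-VAME-Project',
        videos=['/directory/to/your/video-1.mp4'],
        working_directory='/directory/to/your/workspace',
        videotype='.mp4',
    )
    # init_new_project returns the path to the generated config.yaml,
    # which the subsequent workflow functions take as input.
    print(config)
```

Remember that after initialization the pose estimation file (video-1.csv) still has to be copied into the project's pose_estimation folder by hand.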