In this paper, we propose a framework (ALMO) for any-shot learning from multi-modal observations. Using training data containing both objects (inputs) and class attributes (side information) from multiple modalities, ALMO embeds the high-dimensional data into a common stochastic latent space using modality-specific encoders.
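The section does not specify the encoder architecture, so the following is only a minimal sketch of the stated idea: one encoder per modality, each mapping its input to the parameters of a Gaussian over a shared latent space, sampled via the standard reparameterization trick. All names, layer sizes, and dimensions (`ModalityEncoder`, `hidden_dim`, the 2048/312-dimensional inputs) are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """Hypothetical modality-specific encoder: maps one modality's
    high-dimensional input to the parameters of a Gaussian over the
    shared stochastic latent space."""
    def __init__(self, input_dim: int, latent_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
        )
        self.mu = nn.Linear(hidden_dim, latent_dim)      # latent mean
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # latent log-variance

    def forward(self, x: torch.Tensor):
        h = self.backbone(x)
        return self.mu(h), self.logvar(h)

def reparameterize(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """Sample z ~ N(mu, sigma^2) via the reparameterization trick."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)

# One encoder per modality, all targeting the same latent dimensionality.
# Input dimensions below are placeholders, not values from the paper.
latent_dim = 64
object_encoder = ModalityEncoder(input_dim=2048, latent_dim=latent_dim)  # object features
attr_encoder = ModalityEncoder(input_dim=312, latent_dim=latent_dim)     # class attributes

x_obj = torch.randn(8, 2048)   # a batch of object inputs
x_attr = torch.randn(8, 312)   # the corresponding class attributes

z_obj = reparameterize(*object_encoder(x_obj))
z_attr = reparameterize(*attr_encoder(x_attr))  # both live in the common latent space
```

Under this reading, aligning the two samples (or their distributions) in the shared space is what would let the model relate objects to class attributes across modalities.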