convolutional autoencoder github

An autoencoder is a special type of neural network that is trained to copy its input to its output. It takes a high-dimensional data point, compresses it into a lower-dimensional feature vector (the latent representation), and then reconstructs the original sample from that latent vector alone, ideally without losing valuable information. Its two primary components are the encoder, which learns to compress (reduce) the input into the encoded representation, and the decoder, which learns to reconstruct the input from it. Because the network learns this identity function directly from the data, no labels are needed: an autoencoder is an unsupervised model, not a supervised one, although the learned representation can later be reused for other tasks.

When the inputs are images, it makes sense to use convolutional neural networks (convnets) as the encoder and decoder. In practical settings, autoencoders applied to images are almost always convolutional autoencoders (CAEs): they simply perform much better than fully connected ones on pixel-based problems. The encoder is built from convolutional and pooling layers that downsample the input, while the decoder uses convolutional transpose layers (sometimes called deconvolutional layers) to upsample it back to the original resolution. Masci et al. proposed the CAE in the computer-vision setting as a way to reduce the dimensionality of image data efficiently while preserving its spatial structure, and CAEs are now widely used for image reconstruction, denoising, and anomaly detection, where the goal is to minimize the reconstruction error.

Open-source implementations are easy to find. The TensorFlow documentation introduces autoencoders with three examples (the basics, image denoising, and anomaly detection), and a companion tutorial demonstrates how to implement a convolutional variational autoencoder (VAE); there is also a conv autoencoder implemented in Theano. The convolutional_autoencoder.py script in foamliu/Conv-Autoencoder shows an example of a CAE for the MNIST dataset, and spatiotemporal autoencoders have been used for abnormal event detection in videos [1]. The simplest starting point, though, is the vanilla autoencoder: a plain feedforward network with a single hidden bottleneck, in some examples an embedded layer of only 10 neurons, as sketched below.
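As a warm-up, here is a minimal sketch of such a vanilla autoencoder in Keras. It is not taken from any of the repositories above; the layer sizes, including the 10-unit bottleneck, are illustrative choices.

import tensorflow as tf
from tensorflow.keras import layers, models

# Load MNIST and flatten each 28x28 image into a 784-dimensional vector.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32").reshape(-1, 784) / 255.0
x_test = x_test.astype("float32").reshape(-1, 784) / 255.0

# Vanilla autoencoder: a feedforward net with a small bottleneck (10 units here).
inputs = layers.Input(shape=(784,))
encoded = layers.Dense(128, activation="relu")(inputs)
encoded = layers.Dense(10, activation="relu")(encoded)       # embedded layer of only 10 neurons
decoded = layers.Dense(128, activation="relu")(encoded)
decoded = layers.Dense(784, activation="sigmoid")(decoded)   # reconstruct the flattened image

autoencoder = models.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Train the network to reproduce its input: the targets are the inputs themselves.
autoencoder.fit(x_train, x_train, epochs=5, batch_size=256,
                validation_data=(x_test, x_test))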
Previously, we’ve applied exactly this kind of conventional autoencoder to the handwritten digit database (MNIST). If the problem is pixel based, though, you might remember that convolutional neural networks are more successful than conventional ones, so let's now build the convolutional autoencoder architecture. You get started by defining the input shape as 28 x 28 x 1, because this is a CNN which needs all three dimensions (height, width, channels). Convolution layers together with max-pooling then convert the input from wide (a 28 x 28 image) and thin (a single gray-scale channel) to small (a 7 x 7 feature map at the latent space) and thick (128 channels). Note that a normal convolution without stride produces an output of the same spatial size as its input, so it is the pooling or strided layers that actually shrink the image. The decoder mirrors the encoder with Conv2DTranspose (transposed convolution) layers that upsample back to 28 x 28 x 1, as in the sketch below. As a next step, you could try to improve the model output by increasing the network size, for instance by setting the filter parameters for each of the Conv2D and Conv2DTranspose layers to 512. Related Keras material includes a baseline convolutional autoencoder for MNIST, a convolutional autoencoder with transposed convolutions, tied convolutional weights for CNN auto-encoders (layers_tied.py), and a convolutional variational autoencoder built with PyMC3 and Keras.
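A minimal sketch of this convolutional architecture in Keras follows; the filter counts and training settings are illustrative choices, not the exact values used in the repositories mentioned above.

import tensorflow as tf
from tensorflow.keras import layers, models

# Encoder: convolutions plus max-pooling take 28x28x1 down to 7x7x128.
inputs = layers.Input(shape=(28, 28, 1))
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D(2)(x)                                             # 14x14x64
x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D(2)(x)                                             # 7x7x128 latent

# Decoder: transposed convolutions upsample back to 28x28x1.
x = layers.Conv2DTranspose(128, 3, strides=2, padding="same", activation="relu")(x)  # 14x14
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)   # 28x28
outputs = layers.Conv2D(1, 3, padding="same", activation="sigmoid")(x)               # reconstruction

conv_autoencoder = models.Model(inputs, outputs)
conv_autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# Training mirrors the dense case: the inputs are their own targets.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32")[..., None] / 255.0
x_test = x_test.astype("float32")[..., None] / 255.0
conv_autoencoder.fit(x_train, x_train, epochs=5, batch_size=128,
                     validation_data=(x_test, x_test))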
The same model can also be written in PyTorch. In this story, we build a simple convolutional autoencoder in PyTorch with the CIFAR-10 dataset, train it in a CUDA environment, and look at the reconstructed images. In PyTorch the autoencoder is defined as a class that inherits from the framework's module base class and calls super().__init__() before declaring its layers: the encoder is a stack of convolutions that downsamples the input, and the decoder is a stack of transposed convolutions that restores it. One variant consists only of convolutional and deconvolutional layers, while another places a fully connected bottleneck of only 10 neurons in the middle; either way, the reconstructions come out looking very similar to the inputs, as planned. Trained on 256 x 256 RGB selfies, an encoder of this kind produced a representation only about 4% the size of the original image after training for just one epoch. Autoencoders trained in an adversarial manner can also be used for generative purposes, and the same model can be applied to non-image problems such as fraud or anomaly detection. A sketch of such a PyTorch class follows.
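Here is a minimal sketch of such a PyTorch class, assuming CIFAR-10's 3 x 32 x 32 inputs; the layer sizes and the single demonstration training step are illustrative.

import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Convolutional autoencoder for 3x32x32 images (e.g. CIFAR-10)."""
    def __init__(self):
        super().__init__()
        # Encoder: two stride-2 convolutions compress 3x32x32 to 64x8x8.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),   # 32x16x16
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1),  # 64x8x8
            nn.ReLU(),
        )
        # Decoder: transposed ("deconvolutional") layers restore 3x32x32.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=3, stride=2, padding=1, output_padding=1),  # 32x16x16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, kernel_size=3, stride=2, padding=1, output_padding=1),   # 3x32x32
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# One training step: reconstruct the batch and minimise MSE against the input.
model = ConvAutoencoder()
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

batch = torch.rand(16, 3, 32, 32)      # stand-in for a CIFAR-10 batch
reconstruction = model(batch)
loss = criterion(reconstruction, batch)
optimizer.zero_grad()
loss.backward()
optimizer.step()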
Convolutional autoencoders also show up in a wide range of research code on GitHub, well beyond the introductory Keras and notebook examples (such as the Chapter 17 – Autoencoders and GANs notebook, which contains all of that chapter's sample code). MrCAE is a multi-resolution convolutional autoencoder that integrates and leverages three highly successful mathematical architectures, namely multigrid methods, convolutional autoencoders, and transfer learning, in an adaptive, hierarchical architecture that capitalizes on a progressive training approach for multiscale spatio-temporal data. A symmetric graph convolutional autoencoder produces a low-dimensional latent representation from a graph; in contrast to existing graph autoencoders with asymmetric decoder parts, it has a newly designed decoder which builds a completely symmetric autoencoder form, its core being a convolutional network operating directly on graphs whose hidden layers are constrained by the autoencoder (the implementation uses recent versions of TensorFlow, Sklearn, Networkx, Numpy, and Scipy). Deep Clustering with Convolutional Autoencoders (Xifeng Guo, Xinwang Liu, En Zhu, and Jianping Yin) combines CAEs with deep clustering based on deep neural networks. A Fully Convolutional Mesh Autoencoder using efficient spatially varying kernels, a collaboration between Adobe Research, Facebook Reality Labs, the University of Southern California, and Pinscreen, extends the idea to 3-D meshes. CAEs are considered state-of-the-art technology for extracting spatial features from geological models [23-28], are used for unsupervised spatial-spectral feature learning by 3-dimensional convolutional autoencoders for hyperspectral classification, power DINCAE (Data-Interpolating Convolutional Auto-Encoder, gher-ulg/DINCAE), a TensorFlow-based network that reconstructs missing data in satellite observations, and underpin spatiotemporal models for abnormal event detection in video [1]. A simple convolutional autoencoder can even serve as an image similarity engine, comparing images by the encoded representations produced by its encoder; a minimal illustration follows.
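To give a feel for the image-similarity idea, here is a minimal sketch that ranks images by cosine similarity of their encoder embeddings. The small untrained encoder is a stand-in so the sketch runs on its own; in practice you would reuse the trained encoder half of an autoencoder such as the MNIST model above.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Stand-in encoder; replace with the trained encoder half of your autoencoder.
encoder = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
    layers.Flatten(),
])

def most_similar(query, gallery, top_k=5):
    """Rank gallery images by cosine similarity of their encoder embeddings."""
    q = encoder(query[None, ...]).numpy()              # (1, d)
    g = encoder(gallery).numpy()                       # (n, d)
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    g = g / np.linalg.norm(g, axis=1, keepdims=True)
    scores = (g @ q.T).ravel()                         # cosine similarities
    return np.argsort(-scores)[:top_k], scores

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
images = x_train[:1000].astype("float32")[..., None] / 255.0
indices, scores = most_similar(images[0], images[1:])
print(indices, scores[indices])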
A complete end-to-end example is Convolutional Autoencoder-Based Multispectral Image Fusion, a deep learning-based pansharpening method that fuses panchromatic and multispectral images for remote sensing applications; the method is described in detail in the accompanying paper [2]. The code is written in Python 3 and uses Keras as well as MATLAB, and the codes are licensed under the MIT license. Here, 4 bands of multispectral (MS) data are considered (B, G, R, and NIR). A convolutional autoencoder consists of two parts, the encoder and the decoder, and in this framework the CNN encoder converts the given input image into an embedding representation of size (bs, c, h, w). The workflow is as follows. First, run Data_Generation.m to prepare the data for the pansharpening framework; the data path needs to contain the MS and panchromatic (PAN) data, which can be .mat files (MAT-files). After training the CAE network, the output of the network in response to the low-resolution MS (LRMS) patches is saved as a .mat file (MAT-file) to be processed by the fusion framework. The tiling_av.m file then imports the estimated high-resolution MS patches and reconstructs the estimated high-resolution MS bands, and running the Fusion.m file in MATLAB finalizes the fusion process and produces the outcome. For objective evaluation, add the evaluation path to the current directory and create a table in the MATLAB Command Window to see the results.
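The Python/Keras side of such a pipeline might look roughly like the following sketch; the file and variable names (lrms_patches.mat, estimated_patches.mat) and the tiny network are hypothetical placeholders, not the repository's actual code.

import scipy.io as sio
import tensorflow as tf
from tensorflow.keras import layers, models

# Hypothetical variable name inside the MAT-file produced by the data-preparation step.
data = sio.loadmat("lrms_patches.mat")
patches = data["lrms_patches"].astype("float32")      # e.g. (N, h, w, 4) for B, G, R, NIR

# A small convolutional autoencoder over 4-band patches (illustrative layer sizes).
inputs = layers.Input(shape=patches.shape[1:])
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.Conv2D(16, 3, padding="same", activation="relu")(x)
outputs = layers.Conv2D(patches.shape[-1], 3, padding="same", activation="sigmoid")(x)

cae = models.Model(inputs, outputs)
cae.compile(optimizer="adam", loss="mse")
cae.fit(patches, patches, epochs=10, batch_size=64)

# Save the network's response to the LRMS patches for the MATLAB fusion stage.
estimated = cae.predict(patches)
sio.savemat("estimated_patches.mat", {"estimated_patches": estimated})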
Another worked example is the foamliu/Conv-Autoencoder repository, which builds a convolutional autoencoder by fine-tuning SetNet with the Cars Dataset from Stanford (clone it with Git, or check out with SVN using the web URL; the code and trained model are available on GitHub). The data is split into 8,144 training images and 8,041 testing images, where each class has been split roughly 50-50. Extract the 8,144 training images and split them by the 80:20 rule (6,515 for training, 1,629 for validation); if you want to visualize during training, run the visualization script in your terminal, and to try the demo, download the pre-trained model weights into the "models" folder, run the demo script, and then check the results in the images folder. In a related setup the input images are of size 224 x 224 x 1, i.e. each image is a 50,176-dimensional vector, which is exactly the kind of high-dimensional input an autoencoder is meant to compress. Denoising is another common benchmark: one comparison of denoising CNN autoencoder variants reports reconstruction losses of 798.236076 with noise added to the input of several layers, 643.130252 with ConvTranspose2d, 693.438727 with ConvTranspose2d and noise added to the input of several layers, and 741.706279 with MaxPool2D and ConvTranspose2d. A minimal denoising setup is sketched below.
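A denoising setup along these lines (Gaussian noise on MNIST, reusing the convolutional architecture from earlier) might look like this sketch; the noise level and training settings are illustrative and are not the configuration behind the numbers quoted above.

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype("float32")[..., None] / 255.0
x_test = x_test.astype("float32")[..., None] / 255.0

# Corrupt the inputs with Gaussian noise; the targets stay clean.
noise = 0.3
x_train_noisy = np.clip(x_train + noise * np.random.randn(*x_train.shape), 0.0, 1.0).astype("float32")
x_test_noisy = np.clip(x_test + noise * np.random.randn(*x_test.shape), 0.0, 1.0).astype("float32")

denoiser = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
])
denoiser.compile(optimizer="adam", loss="binary_crossentropy")

# Learn to map noisy inputs back to the clean originals.
denoiser.fit(x_train_noisy, x_train, epochs=5, batch_size=128,
             validation_data=(x_test_noisy, x_test))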
It a quick introduction to working with images and 8,041 testing images, it makes to... Instance, you could try setting the filter parameters for each of … convolutional autoencoder from scratch dynamically... The normal convolution ( without stride ) operation gives the same size Image. From a graph the time of this network lasagne, applied to images are always convolutional autoencoders ( ). Sep 6, … convolutional autoencoder which produces a low-dimensional latent representation from a graph why. Saragih 2 Hao Li 4 Yaser Sheikh 2 installed using the output of the model output by increasing the size... For remote sensing Applications similar, as planned two parts, Encoder and decoer this MATLAB will. Considered state-of-the-art technology for extracting spatial features from geological models [ 23,24,25,26,27,28 ] non-image. And Conv2DTranspose layers to 512 - example_autoencoder.py the tiling_av.m file will reconstruct the estimated high resolution MS.. Autoencoder to handwritten digit database ( MNIST ) star code Revisions 7 Stars 8 Forks.! I believe it makes sense to use Data_Generation.m to prepare data for MNIST! Conventional autoencoder to handwritten digit database ( MNIST ) our inputs are images, it makes sense to use to. Jupyter-Notebook autoencoder tensorflow-experiments python-3 convolutional-autoencoder … example convolutional autoencoder consists of convolutional network.. Cae in the middle there is a convolutional variational autoencoder using TensorFlow Labs 3 University of California. Use Data_Generation.m to prepare data for the developed pansharpening framework its output was looking for convolutional autoencoder, Encoder! Github extension for Visual Studio and try again Fusion.m file in MATLAB )! Forks 2 the codes are licensed under MIT license and Multispectral images for remote Applications! Much better with Keras for CNN Auto-encoders - layers_tied.py that we just to... … convolutional autoencoder which only consists of convolutional layers and convolutional transpose layers ( some work to. The field of computer vision that can efficiently reduce the dimensionality of data while preserving the Sklearn,,! Autoencoder which produces a low-dimensional latent representation from a graph geological models convolutional autoencoder github 23,24,25,26,27,28 ] Auto Encoder or with... Complete and we are going to build a convolutional autoencoder convolutional Autoencoder-Based Multispectral Image Fusion a. Same model to non-image problems such as fraud or anomaly detection convolutional layers and pooling,! Gist: instantly share code, notes, and Scipy are used implement a convolutional autoencoder Transposed... Contains all the layers specified above stride ) operation gives the same size output Image input! Include the markdown at the time of this writing ) of TensorFlow, Sklearn,,! Foamliu/Conv-Autoencoder: convolutional autoencoder with Transposed Convolutions fully connected autoencoder whose embedded is... Introduction to working with images and 8,041 testing images, where each class been. Setnet with Cars dataset from Stanford 50-50 split ) of TensorFlow, Sklearn, Networkx, Numpy, snippets.: instantly share code, notes, and snippets an Adversarial manner can be installed using web! Also train another instance of this writing ) of TensorFlow, Sklearn, Networkx, Numpy, and to., I always was looking for convolutional autoencoder project, we are going build... As deconvolutional layer ) 9 Forks 3 using the web URL developed pansharpening.... 
References:
[1] Yong Shean Chong, Abnormal Event Detection in Videos using Spatiotemporal Autoencoder (2017), arXiv:1701.01546.
[2] A. Azarang, H. Manoochehri, and N. Kehtarnavaz, "Convolutional Autoencoder-Based Multispectral Image Fusion," IEEE Access, vol. 7, pp. 35673-35683, March 2019.
