Bulletin of the American Physical Society
APS March Meeting 2023
Las Vegas, Nevada (March 5-10)
Virtual (March 20-22); Time Zone: Pacific Time
Session N00: Poster Session II (11am-2pm PST)
Wednesday, March 8, 2023
Room: Exhibit Hall (Forum Ballroom)
Sponsoring Unit: APS
Abstract: N00.00206 : Developing a Neural Network (NN) model for Analysing Fractal images to Recognize Pareidolia Phenomenon*
(University of Oregon)
(University of Oregon)
Nate Gonzales Hess
(University of Oregon)
In previous research from 2016, subjects were asked to form percepts from computer-generated fractal images. Participants were presented with fractal noise at different fractal dimensions and asked to imagine objects within the shapes. As a result, fractals with medium-to-low fractal dimensions tended to produce more noticeable pareidolia. This prior research inspired us to build a deep-learning-based model of pareidolia detection.
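Fractal dimension, the property varied in these stimuli, can be estimated from a binary image with the standard box-counting method. The sketch below is our own illustration (function name and grid sizes are assumptions), not the stimulus-generation code from the 2016 study:

```python
import numpy as np

def box_count_dimension(mask: np.ndarray, sizes=(2, 4, 8, 16, 32)) -> float:
    """Estimate the box-counting (fractal) dimension of a binary image.

    The image is covered with grids of progressively larger boxes; the
    negative slope of log(box count) vs. log(box size) approximates D.
    """
    counts = []
    for s in sizes:
        # Trim the mask so it tiles evenly into s x s boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        trimmed = mask[:h, :w]
        # Count boxes containing at least one foreground pixel.
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    # Fit a line in log-log space; the negated slope is the dimension.
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope
```

A filled square yields a dimension near 2, a straight line near 1, and fractal noise falls in between, matching the 1.1D-1.9D range discussed below.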
In the current research, our goal was to use Convolutional Neural Networks (CNNs) to mimic the results observed in human-subject experiments on pareidolia perception. The application of CNNs was motivated by the fact that they learn relevant features from an image at multiple levels, similar to the hierarchy of receptive fields in the human visual system. Accordingly, we set out to build a “pareidolia classifier,” and to achieve an effective model we had to combine classification techniques with a generative component.
At the initial stage, we chose to synthesize data rather than work from an existing dataset. Because we were concerned with identifying the forms of objects, working from photographs would have required isolating every object from its background. Our approach was to limit the object classes to a small number (15) and build synthetic datasets from many rotated views of 15 different 3D models. Since the fractal stimuli were black and white, we created greyscale top-lit views of each 3D model and then converted them to black and white. We used 3ds Max to load, animate, and render the 15 models. The final dataset for this project contained fifteen 3D models, with 100 images for each.
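The greyscale-to-black-and-white conversion step can be sketched as a simple threshold; the function name and the threshold value of 128 are our own assumptions, not the pipeline's actual parameters:

```python
import numpy as np

def to_black_and_white(grey: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Convert an 8-bit greyscale render to a binary black/white image.

    Pixels at or above `threshold` become white (255); all others black (0).
    """
    return np.where(grey >= threshold, 255, 0).astype(np.uint8)
```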
Our method was to implement code that identified regions in the fractal stimuli with boundaries larger than a given size. These regions were assigned bounding boxes and extracted to their own image files, where artifacts were cleaned up and the images were prepared for use as testing data. Each region was then tested against the 15 object classes. Once a region was classified as belonging to an object class, it was passed to a final stage, where it was tested against each of the 100 frames within that class. After each region was assigned a specific frame within an object class, a composite image was constructed from the fractal stimuli regions and their paired object images. All of the testing models were pre-trained, which enabled the system to produce the composited pareidolia images in a matter of seconds.
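The region-extraction stage can be illustrated with connected-component labeling over a binary mask; this flood-fill sketch (function name, 4-connectivity, and the `min_pixels` cutoff are our assumptions) is not the project's actual code:

```python
from collections import deque
import numpy as np

def extract_regions(mask: np.ndarray, min_pixels: int = 20):
    """Return bounding boxes (top, left, bottom, right) of connected
    foreground regions containing at least `min_pixels` pixels.
    """
    rows, cols = mask.shape
    visited = np.zeros(mask.shape, dtype=bool)
    boxes = []
    for r in range(rows):
        for c in range(cols):
            if mask[r, c] and not visited[r, c]:
                # Breadth-first flood fill over 4-connected neighbours.
                queue = deque([(r, c)])
                visited[r, c] = True
                pixels = []
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
                if len(pixels) >= min_pixels:
                    ys = [p[0] for p in pixels]
                    xs = [p[1] for p in pixels]
                    boxes.append((min(ys), min(xs), max(ys) + 1, max(xs) + 1))
    return boxes
```

Each returned box can then be cropped to its own image file and passed to the classification stage.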
The object-class images served as the training dataset, and the fractal stimuli served as the testing dataset, with the goal of finding the “pareidolia” of our training data within the fractal images. In this iteration, the neural network was a simple classifier, looking for similar shapes in the fractal stimuli and the object classes. We tested on stimuli across a range of fractal dimensions, from 1.1D (almost smooth) to 1.9D (noisy). While we will examine different architectures in the future, deep neural networks provide a useful analog for modeling biological neural processes, and CNNs are specifically relevant to the human visual system, as they mimic the function of receptive fields in the mammalian visual system.
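The shape-matching idea behind this classification stage can be shown with a toy stand-in: scoring a region against binary class templates by intersection-over-union and keeping the best match. This is an illustrative simplification (names and scoring are our assumptions), not the trained CNN:

```python
import numpy as np

def best_matching_class(region: np.ndarray, class_templates: dict) -> str:
    """Return the name of the binary template most similar to `region`,
    scored by intersection-over-union (templates assumed pre-resized).
    """
    def iou(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 0.0
    return max(class_templates, key=lambda name: iou(region, class_templates[name]))
```

In the actual pipeline, a CNN replaces this hand-written score: its learned features play the role of the templates, at multiple levels of abstraction.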
In this project, we investigated a small range of deep computer-vision possibilities and achieved a primitive but functioning model of human pareidolia, which is beneficial for future endeavors. Our generated results were somewhat homogeneous across different fractal images because we had only 15 object classes. Yet even with a small dataset, the results were often surprising, sometimes for their randomness and sometimes for their accuracy. Because more complex fractals contain more regions, and so can be assigned a greater variety of shapes, higher-fractal-dimension stimuli produced both nonsensical and visually interesting results. As with human subjects' pareidolia, most of the “cleanest” results fell within the 1.1-1.4 fractal-dimension range. For example, for fractal dimensions around 1.1 and 1.3 we usually matched around 5 or 6 object classes, while at higher dimensions (e.g., 1.7 or 1.9) we got more matches (about 20).
In summary, we built a Convolutional Neural Network (CNN)-based “pareidolia classifier,” added a generative component to the model using a Python function, and ran the model successfully on a range of fractal stimuli, from 1.1D to 1.9D. Our future goals include training the current model to address positive and negative color fields. Moreover, we will apply different neural network architectures, further augmentation techniques (namely rotation, scaling, and skewing) to the training data, and additional partitioning techniques (e.g., instance segmentation) to cover a wider range of shape-classification possibilities. As a more speculative step, we would like to train generative models in which the fractals serve as initialization seeds for a generative morph toward the training data they are paired with.
*University of Oregon