The fork cell and the von Economo neuron, which are found in the insular cortex and/or the anterior cingulate cortex, are defined by their unique morphologies. Their shapes are not pyramidal: the fork cell has two primary apical dendrites, and the von Economo neuron is spindle-shaped (bipolar). The presence of such neurons has been reported only in higher animals, particularly humans and great apes, suggesting that they are specific to the most evolved species. Although these neurons are likely involved in higher brain function, the lack of an experimental animal model makes further investigation difficult. Here we ask whether equivalent neurons exist in the mouse insular cortex. In humans, Fezf2 has been reported to be highly expressed in these morphologically distinctive neurons; we therefore examined the detailed morphology of Fezf2-positive neurons in the mouse brain. Although von Economo-like neurons were not identified, Fezf2-positive fork cell-like neurons with two characteristic apical dendrites were discovered. Electron microscopy indicated that these neurons did not embrace capillaries but rather held another cell; we term such neurons holding neurons. We further observed that several molecules known to be localized in fork cells and/or von Economo cells in humans, including neuromedin B (NMB) and gastrin-releasing peptide (GRP), were also localized in the mouse insular cortex. Based on these observations, it is likely that an equivalent of the fork cell is present in the mouse.
This work has three primary objectives: first, to establish a gas concentration map; second, to estimate the point of emission of the gas; and third, to generate a path from any location to the point of emission for UAVs or UGVs. A mountable array of MOX sensors was developed so that the angles and distances among the sensors, alongside the sensor data, could be used to identify the influx of gas plumes. Gas dispersion experiments were conducted under indoor conditions to collect data at numerous locations and angles for training machine learning algorithms. Taguchi orthogonal arrays were used for experiment design to select the gas dispersion locations. For the second objective, the pre-processed data were used to train an off-policy, model-free reinforcement learning agent using Q-learning. After training on the training data set, Q-learning produces a table called the Q-table, which contains state-action values used to generate an autonomous path from any point in the testing dataset to the source. The entire process is carried out in an obstacle-free environment, and the scheme is designed to operate in three modes: search, track, and localize. The hyperparameter combinations of the RL agent were evaluated by trial and error, and ε = 0.9, γ = 0.9, and α = 0.9 proved the fastest path-generating combination, taking 1258.88 seconds for training and 6.2 milliseconds for path generation. Out of 31 unseen scenarios, the trained RL agent generated successful paths for all 31; however, the UAV successfully reached the gas source in only 23 of them, a success rate of 74.19%. These results pave the way for using reinforcement learning techniques for autonomous path generation in unmanned systems, while improving the accuracy of the reported results remains future work.
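The tabular Q-learning scheme described above can be sketched as follows. The grid layout, reward shaping, and source location are illustrative assumptions, not the authors' implementation; only the hyperparameters (ε = 0.9, γ = 0.9, α = 0.9) and the off-policy tabular update come from the abstract.

```python
import random

# Illustrative 5x5 grid; the source cell and rewards are assumptions.
SIZE = 5
SOURCE = (4, 4)                                # hypothetical emission point
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right

ALPHA, GAMMA, EPSILON = 0.9, 0.9, 0.9          # values reported in the abstract

# Q-table: state-action values, initialized to zero.
Q = {((r, c), a): 0.0
     for r in range(SIZE) for c in range(SIZE) for a in range(len(ACTIONS))}

def step(state, a):
    """Move within grid bounds; assumed reward: +100 at source, -1 per step."""
    dr, dc = ACTIONS[a]
    nxt = (min(max(state[0] + dr, 0), SIZE - 1),
           min(max(state[1] + dc, 0), SIZE - 1))
    reward = 100.0 if nxt == SOURCE else -1.0
    return nxt, reward, nxt == SOURCE

def train(episodes=2000):
    for _ in range(episodes):
        s = (random.randrange(SIZE), random.randrange(SIZE))
        done = s == SOURCE
        while not done:
            if random.random() < EPSILON:      # epsilon-greedy exploration
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda x: Q[(s, x)])
            nxt, r, done = step(s, a)
            best_next = max(Q[(nxt, x)] for x in range(len(ACTIONS)))
            # Off-policy Q-learning update: target uses the greedy next value.
            Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
            s = nxt

def greedy_path(start):
    """Read the learned Q-table to generate a path from `start` to the source."""
    s, path = start, [start]
    for _ in range(4 * SIZE * SIZE):           # safety bound on path length
        if s == SOURCE:
            break
        a = max(range(len(ACTIONS)), key=lambda x: Q[(s, x)])
        s, _, _ = step(s, a)
        path.append(s)
    return path

random.seed(0)
train()
print(greedy_path((0, 0)))
```

Once trained, path generation is a fast table lookup at each state, which is consistent with the millisecond-scale path-generation times the abstract reports relative to its much longer training time.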