Displaying all 2 publications

  1. Mustaza SM, Elsayed Y, Lekakou C, Saaj C, Fras J
    Soft Robot. 2019 Jun;6(3):305-317.
    PMID: 30917093 DOI: 10.1089/soro.2018.0032
    Robot-assisted surgery is gaining popularity worldwide, and there is increasing scientific interest in exploring the potential of soft continuum robots for minimally invasive surgery. However, remote control of soft robots is far more challenging than that of their rigid counterparts. Accurate modeling of manipulator dynamics is vital for remotely controlling the diverse movement configurations and is particularly important for safe interaction with the operating environment. However, current dynamic models applied to soft manipulator systems are simplistic and empirical, which restricts the full potential of this new soft-robot technology. This article therefore provides new insight into the development of a nonlinear dynamic model for a soft continuum manipulator based on a material model. The continuum manipulator used in this study is treated as a composite material, and a modified nonlinear Kelvin-Voigt material model is used to embody the visco-hyperelastic dynamics of soft silicone. The Lagrangian approach is applied to derive the manipulator's equation of motion. Simulation and experimental results show that this material-modeling approach sufficiently captures the nonlinear time- and rate-dependent behavior of a soft manipulator. Material-model-based closed-loop trajectory control was implemented to further validate the feasibility of the derived model and improve the performance of the overall system.
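The Kelvin-Voigt idea in this abstract (a spring and a damper in parallel, with the elastic term made nonlinear for hyperelastic behavior) can be sketched minimally as follows. The polynomial elastic law and the parameters c1, c2, and eta are illustrative assumptions, not the paper's actual material model or fitted values.

```python
def kelvin_voigt_stress(strain, strain_rate, c1=1.0, c2=0.5, eta=0.1):
    """Illustrative nonlinear Kelvin-Voigt stress response.

    Classical Kelvin-Voigt: sigma = E*eps + eta*deps/dt.
    Here the elastic term is a simple cubic polynomial standing in for
    a hyperelastic law; c1, c2, eta are hypothetical parameters.
    """
    elastic = c1 * strain + c2 * strain**3   # nonlinear spring (hyperelastic-like)
    viscous = eta * strain_rate              # linear damper in parallel
    return elastic + viscous
```

Because the damper contributes only when the strain rate is nonzero, the model's stress response depends on both how far and how fast the material is deformed, which is the time- and rate-dependent behavior the abstract refers to.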
  2. Elizar E, Zulkifley MA, Muharar R, Zaman MHM, Mustaza SM
    Sensors (Basel). 2022 Sep 28;22(19).
    PMID: 36236483 DOI: 10.3390/s22197384
    In general, most existing convolutional neural network (CNN)-based deep-learning models suffer from spatial-information loss and inadequate feature representation, owing to their inability to capture multiscale-context information and to the exclusion of semantic information during pooling operations. In the early layers of a CNN, the network encodes simple semantic representations, such as edges and corners, while in its later layers it encodes more complex semantic features, such as complex geometric shapes. Theoretically, it is better for a CNN to extract features from different levels of semantic representation, because tasks such as classification and segmentation work better when both simple and complex feature maps are utilized. Hence, it is also crucial to embed multiscale capability throughout the network so that the various scales of the features can be optimally captured to represent the intended task. Multiscale representation enables the network to fuse low-level and high-level features from a restricted receptive field to enhance model performance. The main novelty of this review is a comprehensive taxonomy of multiscale deep-learning methods, detailing several architectures and their strengths as implemented in existing works. Predominantly, multiscale approaches in deep-learning networks can be classified into two categories: multiscale feature learning and multiscale feature fusion. Multiscale feature learning derives feature maps by applying kernels of several sizes to collect a wider range of relevant features and predict the spatial mapping of the input images. Multiscale feature fusion combines features of different resolutions to find patterns over short and long distances without requiring a deeper network. Additionally, several examples of these techniques are discussed according to their applications in satellite imagery, medical imaging, agriculture, and industrial and manufacturing systems.
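The "multiscale feature learning" idea above (applying kernels of several sizes in parallel and stacking the resulting maps) can be illustrated with a toy 1-D example. This is a minimal NumPy sketch, not any architecture from the review; the averaging kernels and the kernel sizes (3, 5, 7) are arbitrary choices for demonstration.

```python
import numpy as np

def conv1d(signal, kernel):
    """'Same'-padded 1-D convolution (correlation) for illustration only."""
    pad = len(kernel) // 2
    padded = np.pad(signal, pad)
    return np.array([np.dot(padded[i:i + len(kernel)], kernel)
                     for i in range(len(signal))])

def multiscale_features(signal, kernel_sizes=(3, 5, 7)):
    """Apply averaging kernels of several sizes in parallel and stack
    the resulting maps -- the essence of multiscale feature learning."""
    maps = [conv1d(signal, np.ones(k) / k) for k in kernel_sizes]
    return np.stack(maps)  # shape: (num_scales, len(signal))
```

Each row of the output is the same input seen through a different receptive-field size; in a real CNN the kernels would be learned and the stacked maps fed to subsequent layers, as in Inception-style blocks.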