

http://diydrones.com/forum/topics/current-architecture-of-drone-autopilots-is-wrong-drone-hardware



DIY Drones
Current architecture of drone autopilots is wrong; drone hardware architecture needs to be completely re-engineered.

   Posted by muavdrones on March 30, 2016 at 11:28pm in My Project

I know this statement will raise the question: who is this guy coming and telling us that current-day autopilots are all wrong? Well, I have been flying RC planes since the age of 10, and over the last 5-6 years I have used autopilots from the early ArduPilot of 2009-2010 vintage to the more recent ones from 3DR, DJI and FeiyuTech, including their cheap clones. I have been coding since the age of 15 and am now 21 years old.

Based on my experience with this wide range of autopilots, I have come to the conclusion that the hardware of the majority of autopilots is adapted from the world of data-based computing, made for processing huge chunks of predefined data and producing an appropriate notification or display. In data-based computing, inputs come from a low-response data source like Ethernet/internet or a sensor network; this data is processed, and the outputs are notifications, a display, or in a few cases some very slow-speed controls. Nothing where high-speed control of a dynamic object is involved, even on a single axis.

Hence the question: are these processors and hardware made for controlling a dynamic moving object with freedom across 3 axes, like a drone?

After using all types of available autopilots, I realized that drone control at its core requires the following steps to be repeated as fast as possible:
1. reading sensor values and conveying them to the controller/processor;
2. filtering these sensor values;
3. pushing the filtered values into a PID loop;
4. transferring control commands to the actuators for immediate action.

This cycle needs to be repeated over and over, and the faster the better. This is what determines the stability of the drone: the higher the cycle rate, the higher the stability. So what is needed in the case of drones is a continuous, high-speed, input-output, action-reaction control system. I realized that drone control is not so much about data crunching as about the speed of the control cycle.
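The four-step cycle above can be sketched in plain C. The sensor, filter, PID, and actuator functions here are hypothetical stand-ins for illustration only, not the author's actual code:

```c
typedef struct { float kp, ki, kd, integral, prev_err; } pid_state_t;

/* 1. Read a raw sensor value (stubbed here with a constant). */
static float read_gyro(void) { return 0.8f; }

/* 2. Simple low-pass filter: blend new sample with previous estimate. */
static float lowpass(float prev, float sample, float alpha) {
    return prev + alpha * (sample - prev);
}

/* 3. One PID step: error in, correction out. */
static float pid_step(pid_state_t *p, float error, float dt) {
    p->integral += error * dt;
    float deriv = (error - p->prev_err) / dt;
    p->prev_err = error;
    return p->kp * error + p->ki * p->integral + p->kd * deriv;
}

/* 4. Push the correction to the actuator (stubbed as a no-op). */
static void write_motor(float cmd) { (void)cmd; }

/* One pass of the control cycle; returns the correction it applied. */
float control_cycle(pid_state_t *pid, float *filtered, float setpoint, float dt) {
    float raw = read_gyro();                             /* 1. read sensor */
    *filtered = lowpass(*filtered, raw, 0.2f);           /* 2. filter      */
    float cmd = pid_step(pid, setpoint - *filtered, dt); /* 3. PID         */
    write_motor(cmd);                                    /* 4. actuate     */
    return cmd;
}
```

In real firmware this runs in a tight loop at a fixed rate; the post's point is that everything else in the system must yield to this cycle.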

If the use of drones is to grow, developers have to be given the freedom to code for their applications without compromising this core control cycle. In a drone, developer code that causes a system hang will have catastrophic outcomes: crashes or fly-aways, both of which are regularly reported with current autopilots. Achieving high control cycle speeds and isolating the flight controls is not possible with the current architecture of sequential processing, hence the future of drones is limited by the architecture of currently available autopilots.

So unless a new thought process emerges, drone use cannot grow exponentially. What is needed is a motherboard that is radically different from anything available today.


I have been working on this for a while now, and my first-hand experience is that the moment I shifted my focus to achieving higher-speed control loops with my self-designed autopilot, the level of stability and performance I have been able to get is awesome, even in very high coastal wind speeds on a small 250 mm racer. I achieved this with the most primitive of microcontrollers, the ATMEGA 328 used in the first ArduPilot. Things got even better when I replaced the MPU 6050 IMU with the MPU 9250.

With my custom-made Distributed Parallel Control Computing Bus, I have been able to achieve altitude hold with a total drift of less than 1 meter, a very accurate heading hold, and GPS navigation on the 250 mm racer. All I did was add another ATMEGA 328 in parallel to the first one to add features.

Thanks to this, I am able to completely isolate the core flight control loop from the APP development code, so the drone is never compromised by faulty APP code.

Distributed parallel control computing, I have found from my own experience, is an architecture that really has the potential to create exponential growth in drone applications. I would be interested to know of any other ways in which others are trying to address this unique core control processing requirement of drones.


Replies to This Discussion

Permalink Reply by James on March 31, 2016 at 4:03am

   Congratulations on building and coding your own autopilot; that is an impressive achievement.
   Apart from that, I think that you need to do some more research.
   Your post sounds ill-informed. You need to better understand how the current autopilot architecture evolved, and why, before criticizing it.
   You also need to back up your claims with proper details and video.

Permalink Reply by Patrick Poirier on March 31, 2016 at 4:14am

   Yeah, a cluster of Arduino Nanos :-) ... Why not? I guess you will have a hard time when trying to implement sensor fusion, though.
   Based on what you are describing, the proper avenue would be using some kind of FPGA SoC, like the Xilinx Zynq.
   Good luck!

Permalink Reply by muavdrones on April 2, 2016 at 2:28am

       Hi James & Patrick,
       Thanks a lot for your comments/responses to my post; sorry I could not respond earlier because of the pre-weekend workload yesterday.


       @James: Thanks a lot for the appreciation of me creating my autopilot ZUPPA.
       Now to get to the rest of your comment:
       My post is not intended to criticise the current architecture of APs at all; on the contrary, it is meant to stimulate thought about taking AP technology to the next level: creating user-friendly, APP-expandable drones, which will effectively make the drone a tech gadget used by many for various applications, like the computers or smartphones of today.


       The goal of the current AP architecture was to be able to fly a drone and electronically stabilize it along 3 axes, to a large extent something like the early PCs that ran on DOS, or the early mobiles we used just to make a phone mobile. The subsequent evolution of these gadgets has shown that the more user-friendly they got, and the more applications they could be put to, the greater their usage, mass adoption and consumption.
       E.g. when resistive stylus-touch smartphones gave way to capacitive finger touch, thanks to Steve Jobs, that was technological evolution. So it has to be for drones.


       So the current architecture has achieved the goal of flying the drone, but as I see it, future APs need to make a drone user-friendly and APP expandable.
       I believe that the current direction of using higher and higher processing power for APs is not the way forward, as drone control is not about data processing like the computers of the past.
       In the case of a drone, a system hang like that experienced on a PC or smartphone is not an option.
       For the drone to become a tech gadget, isolating flight control from other processes and assigning it the highest priority is the way forward.


       Sorry if I have not been able to convey this intention of my post.
       Drone technology needs to evolve to make drones user-friendly and APP expandable.


       My statement that AP architecture in its current state is restricted is based on the following. Let's take the case of the two leading APs of today:
           3DR APs like APM and Pixhawk: though they are the only ones usable on both fixed and rotary wing, and are APP expandable to an extent, the user needs significant DIY skills to tune them to their chosen airframes. Only 3DR's ready-made drones might be user-friendly when in full auto mode; the APs themselves are in no way motherboards that can make a drone user-friendly and APP expandable.
           DJI Naza: only usable on copters, cannot be used on fixed wing; it might be user-friendly, but its APP expandability is seriously questionable.


       I have personally experienced, over the past 5 years, from the early ArduPilots to the more recent ones, that these two major limitations are preventing ultra-local manufacturing of drones.


       A drone-pilot motherboard, like the one that opened up PC assembly at the ultra-local level, is key to opening up the potential of drone use across applications and turning it into a tech gadget.


       The intention of my post is to stimulate thinking on alternative architectures that could make any drone a user-friendly and APP-expandable gadget.


       My idea was to share my experience with one such alternative that I designed and developed, which actually addresses these limitations and converts any drone into a user-friendly and APP-expandable tech gadget.


       Here are details of my development that reinforce the content of the post: ZUPPA, my AP that uses DPC, has the ability and the potential to make any drone a user-friendly and APP-expandable tech gadget:
       a) Block diagram comparing my DPC and conventional AP architectures, attached below.
       b) Specification: enclosed data sheet, ZUPPA V1.


       Videos : https://youtu.be/lOvP3wjVj_s


       We have gone through multiple iterations in the design of this AP.


       ZUPPA autopilots fly both copters and fixed wing.


       In fact, because this design isolates flight management to Core 1, I am able to work on a model of having the APP interface open source, so that any coding errors during app development will not crash the drone, thereby reducing the risk for developers.


       @Patrick: you're right in a way; enclosed is a photo of my AP board, which in a sense is like putting a couple of Nanos together.
       But I am using direct C and assembly to code the primary AHRC (Attitude and Horizon Reference Controller, Core 1), while Core 2 is done in AVR C++.
       Why a primitive language? Frankly, because the native reflection of my code has less impact on the actual execution cycle (high-to-low language conversion implicitly generates additional instructions).
       Conceptually, the first Nano only manages flight parameters (stability, heading, altitude, i.e. drone attitude), and since this is its only function, without any form of extraneous interference, it does it efficiently at high speed. The second Nano (Core 2) handles the GPS maths, velocity vector models, user apps, etc. My GPS system creates its own grid as soon as it is locked (PDOP based). Hence, for future SLAM algorithms (for spatial map generation), the stationary features (statics) can be placed on the grid after being identified by an external sensor, i.e. Xf={[xi, yi]} can be represented on the grid.
       Core 2 also handles all receiver commands from the user and the UART interface from the GCS through a module; the power manager connects to this core, as does the GPS, so the second core is actually more loaded than Core 1 (the kinematic control core). But that doesn't affect performance at all, as the fastest correction from Core 2 has to be generated at a maximum update rate of 50 Hz (IMU velocity correction), and its performance has no direct effect on the final drone's performance. Even after all the maths is calculated for navigation and real-time positioning, the loop cycle on the 328 is 6 ms (worst-case time complexity), 4 ms (best case).
       Core 2 doesn't have any other use for the 16-bit timer, so it is used to track the time since the latest cycle start and to JMP the program counter to cycle end if it exceeds 10 ms (which happens roughly once every 10,000,000 cycles if the UART is updated at 50 Hz, and doesn't happen at the typical <=30 Hz UART update). It is something like a timer-based instruction governor used to prevent system hangs, hence reliability is very high.
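A timer-based cycle governor of the kind described can be sketched as a simple deadline check inside the loop. This is my own illustration in plain C, with a caller-supplied millisecond clock standing in for the AVR's 16-bit timer; none of this is the actual ZUPPA code:

```c
#include <stdint.h>
#include <stdbool.h>

#define CYCLE_BUDGET_MS 10  /* abort the cycle if it overruns 10 ms */

/* Run up to ntasks tasks, or stop early when the budget is exhausted.
   now_ms is a monotonically increasing clock supplied by the caller.
   Returns true if the cycle completed, false if the governor cut it short. */
bool run_cycle(uint32_t (*now_ms)(void), void (*task)(int), int ntasks) {
    uint32_t start = now_ms();
    for (int i = 0; i < ntasks; i++) {
        if (now_ms() - start >= CYCLE_BUDGET_MS)
            return false;   /* "jump to cycle end": skip the remainder */
        task(i);
    }
    return true;
}

/* A fake clock and task for demonstration: each clock call advances 4 ms. */
static uint32_t fake_now = 0;
static uint32_t fake_clock(void) { return fake_now += 4; }
static int tasks_run = 0;
static void fake_task(int i) { (void)i; tasks_run++; }
```

The design trades occasional unfinished work for a bounded loop period, which is exactly the reliability property the post claims.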
       I have developed the sensor fusion code not on absolutes but on relatives, executing only the level of fusion required to achieve the desired goal of ultra-stable flight. Repetitive, high-speed, less accurate, relative fused-sensor-value processing results in continuous high-speed corrections that achieve the same goal of ultra-stable flight.
       This, in essence, is the key difference between data computing and control computing.
       E.g. the IMU's vertical velocity (after external normalization and compensation) is fused with the baro rate reading through a differential-equation-based autocorrelation fuser: based on the incremental autocorrelation of the barometric rate, we correct the gain that fuses the vertical velocity with the barometric rate to produce a fused value as an absolute. Additionally, I have conducted experiments by which I have made the vertical velocity absolute. So yes, I have evaluated the current fusion algorithms, understood them, and implemented segments of them instead of the whole fusion block. They are not as accurate as the absolute sensor fusion algorithms, but they run much faster; in turn we can push the output to the PID faster and give a correction faster, so we give less accurate control input at a higher rate to achieve the ultimate goal of ultra-stability. The mag is compensated using an ellipse-to-circle normalizer, a few rotation matrices to get the axes perfectly aligned with the reference axes, and vector-product compensation.
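The "relative" fusion described above resembles a complementary filter whose blend gain adapts to how well the two sources agree. The sketch below is my own simplified illustration of that shape; the names and the gain-adaptation rule are assumptions, not the ZUPPA algorithm:

```c
/* Fuse IMU-derived vertical velocity with barometer climb rate.
   The blend gain leans toward the baro when the two sources track
   each other, and toward the IMU when they disagree (noisy baro). */
typedef struct {
    float gain;   /* 0..1, weight given to the baro rate */
} fuser_t;

float fuse_vz(fuser_t *f, float imu_vz, float baro_rate) {
    /* Adapt gain from the instantaneous agreement of the sources:
       small disagreement -> trust baro more, large -> less. */
    float diff = imu_vz - baro_rate;
    if (diff < 0) diff = -diff;
    float target = 1.0f / (1.0f + diff);   /* in (0, 1] */
    f->gain += 0.1f * (target - f->gain);  /* slow gain tracking */
    /* Complementary blend of the two sources. */
    return (1.0f - f->gain) * imu_vz + f->gain * baro_rate;
}
```

This is cheap enough to run every cycle, which matches the post's "less accurate but faster" trade-off.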


       The fact that I use ATMEGA 328s keeps costs down while achieving the desired goals of ultra flight stability, user-friendliness and APP expandability, which is a major factor from the commercial perspective as well.
       Photos of my Autopilot ZUPPA :

Attachments:

       1.jpg, 52 KB
       Muav2.jpg, 78 KB
       zk-v1layout.jpg, 27 KB

Permalink Reply by Patrick Poirier on April 2, 2016 at 5:42am

   I like the new concepts; they make us think outside the box, but at the same time we make comparisons with existing developments:
   Technically, these single modules could be integrated within a multicore ARM processor and run as separate threads that interact with each other using shared memory or any other pipeline method. Isn't that basically what any autopilot does?
   Don't you think that adding a physical device for each additional sensor makes it more complicated, and more expensive in the long run, than just linking software modules at compilation time?
   What are the options if you require additional processing power, say for an EKF, or any processing that requires more memory and bandwidth (just like the old Arduino-based autopilot)?

Permalink Reply by muavdrones on Wednesday

   @patrick ,
   The method of running multiple independent threads is a very good idea and frankly , just to let you in on a secret the first version of my autopilot was developed on an arm m3 , to be precise it was the STM32F103RET6 running free rtos . My programming originated in Java (I am a core Java architecture guy ) , so I was more inclined to use threading on the rtos and crack on with the ap.


   I did that: one thread for sensor sampling, one for GPS, one for kinematic processing, etc.


   Then I tried it. It worked pretty well about 80-85% of the time, but sometimes any one of the threads would hit its worst case and restart, causing an inconsistent operation rate; additionally, interrupts interfered with each other, so I optimized the ISRs. Then sometimes the RMA (Resource Management Agent, the semaphore) did not change the hardware lines in time, or at all, and yes, I had 2-3 fly-offs. This got me thinking about why all this was happening. I probed deeper and deeper, right down to the elemental level, where I found my answer.


   Now, as per my tests, a 250 mm racer with no payload has a highest mechanical response of about 35-40 Hz, hence per natural sampling we need to give a correction at a rate of 70-80 Hz (on the safe side I say 100 Hz), for which we need to sample at a minimum of 200-500 Hz, per the natural sampling rule Fsample = 2 Fcorr.


   This is a strictly theoretical view, which assumes no external forces or imperfections exist in the system. But for stable flight we need to assume everything x10 to balance external forces.


   Here is where the problem occurs. Let's probe a little deeper into how an RTOS works: each thread has an addressed location in flash that the RTOS registers at startup, and threads have priorities so that the semaphore can assign resources to either application when a collision occurs. But in its true essence, the RTOS manages instruction time cycles and hardware resource allocation.
   What I explain below doesn't matter too much if we want to perform non-real-time / static real-time control, like controlling a 3D printer, but in our case we are doing dynamic, near-real-time control.


   Now, let's assume two independent threads running at the same time. At a high level they appear to be independent, but actually there is a dependency present in terms of priority. Say the sensor sampling thread and the GPS processing thread both need to be executed: both first parse to the instruction buffer, then based on FIFO or priority they are popped onto the ALU where they are executed. So if a large instruction operation causes delays on the ALU, for example a GPS floating-point process, it affects the execution consistency of the next instruction, which could be a sensor sampling one. This is mitigated in certain OSs by jumping to the next instruction in the buffer if one takes extra time, but that leaves the current instruction unfinished.
   To mitigate this, we choose higher and higher processing power.


   Secondly, hardware resource allocation problems. Take a GPS UART port (UART 1) and say a CLI UART port (UART 2) on the same parent port. When the GPS gets data, the semaphore assigns UART 1 to the GPS parser thread, then UART 2 is assigned to the CLI parser thread. Yes, they are independent and should never interfere, but at times the system gets into a race condition, especially when multiple interrupts and processes are being executed. The semaphore should normally be owned by one process, but it gets owned by two at the same time and the UART port hangs, something like an Android app crashing when you are accessing multiple apps at a time. (Please note these observations of mine have been tested using traces, at a task operation rate of 1000-2000 Hz.)
   Plus there is a lot of online information to support what I am saying.
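The failure mode described, two threads ending up "owning" one UART at once, is what strict single-ownership discipline on a per-resource lock prevents. A minimal POSIX sketch of that discipline (illustrative only, not RTOS or ZUPPA code; the names are mine):

```c
#include <pthread.h>
#include <stdbool.h>

/* One mutex per hardware resource: whoever holds it owns the UART. */
typedef struct {
    pthread_mutex_t lock;
    int owner;               /* -1 when free, else a thread id */
} uart_port_t;

void uart_init(uart_port_t *u) {
    pthread_mutex_init(&u->lock, NULL);
    u->owner = -1;
}

/* Try to claim the port; fails instead of blocking, so a stuck
   owner can never wedge the caller (the "hung UART" case above). */
bool uart_try_claim(uart_port_t *u, int tid) {
    if (pthread_mutex_trylock(&u->lock) != 0)
        return false;        /* someone else owns it */
    u->owner = tid;
    return true;
}

void uart_release(uart_port_t *u, int tid) {
    if (u->owner == tid) {   /* only the owner may release */
        u->owner = -1;
        pthread_mutex_unlock(&u->lock);
    }
}
```

The non-blocking trylock plus owner check is one software-only way to rule out double ownership; the post's answer, moving the isolation into hardware, sidesteps the problem entirely.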


   Finally, app development creates a major problem on the core kernel. If the person developing the app is not very conversant with the way an autopilot works, and codes infinite return statements or holds the semaphore indefinitely, it will result in a system hang (these cases are rare but cannot be ignored, as we are talking about a flying object).


   Hence, I shifted to layering it in hardware, by separating high-priority and low-priority tasks, and wow! All the above problems vanished without too much loss in correction speed; in fact the throughput from the primitive sensors to motor output increased four-fold for my primary core (Core 1).


   Now, coming to the EKF part: yes, the EKF is a very advanced estimation algorithm (widely used in SLAM) which gives us real-time values in terms of vectorial orientation from the Earth's axes instead of our traditional XYZ model of the local body. No, I don't implement the entire EKF block, but parts of it. For position hold, the calculated velocities from the IMU (after normalization and compensation), then X, Y and ECEF X, ECEF Y, are passed through a preferential-gain lead-lag filter (where, again, the autocorrelation of the speed accuracy from the GPS is used as the gain). I have a nonlinear quadrant calculator which fuses the local-frame velocities to the global frame based on mag orientation. I did a comparison of algorithmic throughput and speed on a Cortex-M4 (the Discovery board, F407Z); I was using Eclipse at the time, and I used the EKF from OpenPilot as it was directly Java/Eclipse compatible. At the true clock of 168 MHz, without delays, just running in int main with dummy sensor values (purely to understand throughput and execution time from a complexity point of view), the EKF executed in 370-450 us while my code ran in 118-145 us. Yes, my code will not be as accurate as the EKF, as I am calculating pseudo N, E, D directly from mag heading, while the EKF does it using the magnetic flux on each axis and the speed and heading accuracy, but I can give a faster correction incrementally to achieve an accurate position hold.
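A discrete lead-lag stage of the kind mentioned might look like the sketch below. The structure and constants are my own guesses, purely to illustrate the shape of such a filter; in the real system the gain would be driven by a GPS accuracy measure rather than passed in directly:

```c
/* Discrete lead-lag filter: boosts the leading (derivative) part of
   the input, then lags (smooths) the result toward the output state. */
typedef struct {
    float x_prev;   /* previous input                */
    float y;        /* filter state / current output */
    float lead;     /* lead weight (phase advance)   */
} leadlag_t;

float leadlag_step(leadlag_t *f, float x, float gain) {
    float led = x + f->lead * (x - f->x_prev); /* lead: anticipate the trend */
    f->x_prev = x;
    f->y += gain * (led - f->y);               /* lag: first-order smoothing */
    return f->y;
}
```

Scheduling `gain` from an accuracy estimate makes the filter trust fresh input more when the source is good and coast on its own state when it is not.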


   Looking forward to your replies, as I can explain more of the research I have done.
   "ZUPPA Autopilots' goal is to take AP technology to the next level, beyond what is available today."

Permalink Reply by Patrick Poirier on Wednesday

   Well, I see that you did your homework, and you got my respect for digging so deep into the problem. :-)
   Just to get back to the RTOS: I do not think that we can get hard-real-time-compliant POSIX with FreeRTOS. It is not clear which small-footprint OS offers these features, but it seems that NuttX offers more predictable operation. Here's what Lorenz Meier wrote here a few years ago:
   "As mentioned on this thread it's always a better idea to go with something that is used and tested elsewhere than to roll your own. The benefit of NuttX over FreeRTOS or ChibiOS is its more complete hardware abstraction and adherence to standards (POSIX, which brings things like pthreads, poll(), read(),
   Getting into granular functions within distributed hardware might shift the problem into the bus architecture, in my humble opinion... This is where the devil's hiding :-(
   Concerning the EKF, I must admit that doing the computation on an M4 can get really good results; I am pretty impressed by the power that the STM32F4 series chips can generate. Actually, the latest generation of PX4 (the Pixracer) runs a full-blown EKF on this chip with no apparent problem. But then we are adding cost, and we still have to resolve the bus architecture, which could get pretty messy and expensive.


Permalink Reply by Stephen Zidek on April 2, 2016 at 8:33am

   As a mechanical engineer who has taken a few control theory classes: yep, a faster loop will definitely help with stability. However, in my experience as a hobbyist, a well-tuned Pixhawk multirotor is already very stable. I would welcome faster loops with open arms, but not at the expense of any single existing feature in ArduCopter. Good luck!

Permalink Reply by muavdrones on Wednesday

   @Stephen,
   Thanks for your wishes.
   Have a look at the video link in my earlier reply; you will notice that the stability of the copter is solid across a range of platform sizes.
   As for features, all that the ArduCopter offers is a default here, and in addition the hardware is expandable and APP customizable, which is not a feature any existing autopilot offers.
   ZUPPA kLIk is an APP expansion hardware we have created on the base ZUPPA Autopilot, made for mapping and agriculture applications. ZUPPA kLIk is a complete hardware where the autopilot triggers the camera and logs GPS lat, long, altitude and the drone attitude angles on XYZ at the instant of the trigger.
   This provides geo- and data-tagged images for 3D mapping. ZUPPA kLIk is an APP-specific expansion of the base ZUPPA Autopilot.
   BTW, I am working on an open APP release of the ZUPPA Autopilot, just to demonstrate the feature expansion capabilities, and should be posting that this week.
   Venkatesh


Permalink Reply by Kabir (Developer) on Wednesday

   Hz are the megapixels of drones.
   A faster control loop does not translate to better physical control. Can you show real data which can prove otherwise?

Permalink Reply by muavdrones on Wednesday

   Hi Kabir,
   1) I could not understand part 1 about megapixels: my post is about the autopilot, not a video system.
   2) Regarding the control loop question: it has been answered both by myself and by others like Stephen above.
   Venkat

Permalink Reply by Micha on Wednesday

   Venkatesh, Kabir was making a comparison with the camera megapixel race. More megapixels do not make better cameras. ;)
   However, as a professional photographer I must say he is wrong. Kabir, take a look at the signal-to-noise ratio and dynamic range of the latest high-resolution APS-C and full-frame sensors compared to the lower-resolution versions of just a few years ago, and you will be amazed. ;)

Permalink Reply by Micha on Wednesday

   Well done, my friend. Funny, a few weeks ago I was thinking the same thing while reading a thread about the development of new ArduCopter boards. The current trend of thinking is to increase CPU power and use multicore setups with 3 or more sets of sensors. The first thing that crossed my mind was: what about cost? It is easy for people in richer countries to buy complex FCs, but the majority of people cannot afford $100+ boards. Secondly, why not make a board that runs the basic loops with a bus, and then expand on that with daughter boards that add functionality, like more sensors or even DSP-style boards for more advanced routines such as the camera recognition found on the DJI Phantom 4? This would allow users to upgrade their FCs with the features they want or can afford. It would also allow third-party development of new features, independent of the development of the basic FC. And that is just what you have created. Big thumbs up. If you need some beta testers, let me know.
   @Kabir: How about aliasing noise? Take a look at this: https://www.youtube.com/watch?v=-lmoKal_e4s
   Of course there is a point where more resolution becomes less important, but Joshua makes some good points in this vid.

© 2016 Created by Chris Anderson. Powered by Ning
