mbohu

Everything posted by mbohu

  1. I have them and I do like them, but since it's the only booty suit I had in the last 2 years I can't say for sure how they compare to the regular cordura booties. Essentially they have a couple extra pieces of fabric that create an air channel, which inflates them when you put your legs fully into the airflow (see red circle). I probably notice it most when tracking away; I sure get a lot of forward push and lift from my legs. My measurements were also slightly off and the booties are a bit longer than they should have been, so they don't fully stretch out until I point my toes to the max, and I think that air channel helps to counteract that, inflating them a bit more even when I don't have them stretched to the max. In the picture you can also see that this is an extra area where the stitching can come off (bottom left of the fabric--so in that picture they are actually not quite doing their job), but that took about 2 years of regular use and was extremely easy to fix. A rigger sewed it back on within 5 minutes.
  2. I'm not a DZO or airplane expert, so I'm not sure what the definition of a "short" Caravan is, but Orange Skies Skydiving in Colorado is flying this one:
  3. Somebody's got to disagree: (not about the equipment--definitely don't try to buy equipment too early and no reason to worry about it now. I waited a year and just over 100 solo jumps, because I wanted to try out a few different canopies before deciding, and am super-happy I did.) But I did read this book right as I was doing my first AFF jumps. When you can't jump and are itching to do so, this is a little bit of a substitute, and it won't send you down the wrong road, in my opinion: https://www.amazon.com/Parachute-Its-Pilot-Ultimate-Ram-Air/dp/0977627721
  4. mbohu

    covid-19

    I wish I remembered more about probability theory, so I'd know how to create the actual formula, but my guess is that this is not unlikely at all: You currently have a base probability of about 0.08% of anyone being part of the group of confirmed infections (total US infections / total US population)--let's call it i/p. Then you have the group of active members in this community: m. Then you have the average group size of family/close friends of each active member (maybe 20-30?): f. So the sample size is f*m (maybe 1000-2000?). This is extremely small as a percentage of p, BUT the likelihood that at least 1 member of (f*m) is also a member of i is probably not that small (rough sketch below).

    But of course, the infection rate MUST BE much higher than the reported cases, simply because the percentage of people who are tested is so low. If testing were random, you'd have to assume that the true number of infections (I) is (i/t) * p, where t = number of tested individuals. That would come out to a HUGE number. Of course it is to be assumed that it isn't quite as high, because we are mostly testing individuals who we think have a high likelihood of being infected, but STILL: the real rate of infection is much higher than what's being reported as confirmed cases.

    Anyway: kallend, I hope your son recovers and everything will be well.
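    A minimal sketch of that "at least one" estimate, assuming independence and treating all the numbers above as rough guesses (the member count and contacts per member are placeholders, not real data):

        # Back-of-the-envelope sketch; every number here is a placeholder guess.
        base_rate = 0.0008        # i/p: confirmed infections / total US population (~0.08%)
        members = 50              # m: assumed number of active members here
        contacts_per_member = 25  # f: assumed close family/friends per member (20-30)

        sample = members * contacts_per_member               # f*m
        # Probability that at least one person in the sample is a confirmed case,
        # treating each person as an independent draw at the base rate:
        p_at_least_one = 1 - (1 - base_rate) ** sample
        print(f"P(at least one among {sample}) = {p_at_least_one:.0%}")   # roughly 63%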
  5. I don't know. Us getting some even marginally better ones is pretty essential, I think. On the other hand, we'd never vote for them, so it's probably just a waste of resources.
  6. It doesn't really matter what the exact scenarios or rules are. No matter how you slice it, somewhere at the center of the program has to be some sort of decision-making algorithm, which is fed the data from the sensors or even from multiple cars and road sensors. It sounds like billvon was mostly talking about using neural networks in the pattern recognition algorithms. These would be the algorithms feeding the decision making. It's quite possible that the decision-making algorithms themselves are extremely limited right now and operate on extremely simple instructions (always try to brake, never drive faster than your ability to brake immediately--but as you pointed out with the jaywalker, this is really not realistic--same with the curve: if you drive around a blind corner ready for your own lane being completely blocked, you'd have to come to a virtual stop). The more data is being fed into the decision-making algorithm, though, the more options it will have available, and the more it will have some kind of morally relevant preferences. This would actually be much more the case in a centralized system that is aware of multiple vehicles (via vehicles sending it data and/or road sensors). It now has the ability to consider consequences for ALL vehicles, and in situations that have no perfect outcome it will have to prioritize between all vehicles. Again, the more data it has, the less likely such situations will be, but the likelihood will never be zero.
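    As a purely illustrative sketch of that prioritization (invented vehicle names and harm scores, not any real system), a central planner with data about several vehicles ends up ranking imperfect joint outcomes:

        # Invented illustration of a central planner choosing among joint outcomes.
        def total_cost(joint_plan):
            """joint_plan: dict mapping each vehicle id to its predicted harm score."""
            return sum(joint_plan.values())

        candidate_plans = [
            {"car_A": 0.0, "car_B": 40.0},   # car A unharmed, car B takes the hit
            {"car_A": 15.0, "car_B": 15.0},  # harm spread across both vehicles
        ]
        print(min(candidate_plans, key=total_cost))   # whichever weighting "wins"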
  7. Well, I gave an example from skydiving already. But I guess the easiest one I would come up with for a car would be a mountain road: Say you drive around the curve and find a truck stalled straight across the 2 lanes (=big object to avoid); on the right side is the mountain wall (bigger object to avoid), and the left side is completely free. With the rules we have in place right now ("avoid objects") we'll have some fun on the way down. Now, if you say the response is ALWAYS just to brake as hard as possible, I could not imagine that to be true, because there would be many easy (and more common) scenarios where veering to the left or right would produce much better outcomes.
  8. I think it may have been the original 2016 one that is referenced in link a) (but not the one the article is about)
  9. But aren't there clearly cases where hitting something is preferable to the alternatives? Along the lines of: better to fly into the tree than execute an extreme low turn and hit the ground full speed?
  10. Oh, I didn't know that. Very cool. So that sounds like the decision algorithm is then just based on very simple fixed parameters like "don't hit anything that is considered a threat", and the neural network and learning algorithm are only used in the determination and pattern recognition phase, to determine what is a threat. That is different from what I was told and had discussed with people who were involved in other AI efforts. I would still think that in the end a more powerful system would be more sophisticated and be trained based on outcomes, not on such a simple instruction, so it would choose its decisions based on learned experience from millions of prior outcomes. This is most definitely done in other fields of AI.
  11. And again, then it is possible to consciously decide to go no further with any more detailed determinations. But that is a conscious choice as well--and I can tell you the people I talked to who are in related fields are not indicating that they are stopping with such simple determinations. They do want to figure out if the thing is a person, a vehicle, an inanimate object, etc.
  12. That definitely isn't true. As I said above, the car needs to know not to brake full force for a tumbleweed; it needs to determine if a pothole is too deep to just speed over; and as I said, I am pretty sure that the fatal Tesla accident was caused by the program interpreting an object it should have avoided as one that was irrelevant.
  13. In fact, wasn't the widely reported fatal accident with a Tesla caused by exactly that? The system misinterpreted what exactly it was seeing and had determined that it wasn't the type of thing that needed to be avoided?
  14. I have no idea. You made up this challenge. I assume that I will have the data available, otherwise how do you want me to write a program for it?
  15. That isn't different from "don't hit the person", except that you are reducing the information even more, to "don't hit the large object"--this would almost certainly be too little information for a successful program. They are absolutely trying to take into account whether the object is likely a person or a thing--and to distinguish even further between different "things", such as a tumbleweed and a rock, with different priorities for avoiding each.
  16. Are we assuming the system has sensors that can make all these determinations and that someone at some point decided that these determinations are useful? (i.e. "mother", "newborn"?) Generally, ignoring these determinations, I will have 2 choices: 1. Program the system to NEVER apply more braking power than would be safe for the passengers. 2. Always apply as much braking power as necessary to not hit the person on the track and ignore what that may do to the passengers. I WILL have to make that choice. But again, this would not be a neural network algorithm, if I made that choice. In a neural network algorithm, where the program eventually learns itself how much braking power it applies in each case, it would have to learn this based on final outcomes (by past experience) and yes: that would probably be based on who and how many die or get hurt.
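    As a toy illustration of that explicit choice (made-up threshold and names, not a real vehicle system), the decision ends up hard-coded one way or the other:

        # Toy illustration only -- assumed threshold, not from any real system.
        PASSENGER_SAFE_DECEL = 6.0   # m/s^2, assumed occupant-safety limit

        def braking_command(required_decel, protect_passengers_first):
            """Return the deceleration to request; the policy choice is explicit."""
            if protect_passengers_first:
                # Choice 1: never exceed what is safe for the passengers,
                # even if that means hitting whatever is ahead.
                return min(required_decel, PASSENGER_SAFE_DECEL)
            # Choice 2: brake as hard as needed to avoid the person,
            # regardless of what that does to the passengers.
            return required_decel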
  17. Yes, you have to go back a number of posts, where I am saying that these comments apply to more traditional "expert system" types of programs. They are easier to discuss. However, you have something analogous for self-learning neural network algorithms: You need to set initial success parameters. They determine everything else. It basically cascades backwards from these initial parameters. For example, AlphaZero started out with the very simple parameter of "winning the game". Then it could set its own parameters by working backwards from there. In self-driving algorithms that are based on neural networks, setting the initial success parameters would likely not be as straightforward and will have to include parameters about what is considered "success" in case of fatal crashes.
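    As a hypothetical sketch of what such "initial success parameters" look like in code (the names and weights below are invented for illustration, not from any real self-driving stack):

        # For a game like AlphaZero the success parameter is trivially simple:
        def game_reward(won):
            return 1.0 if won else -1.0

        # For a driving system it cannot stay that simple; crash outcomes must be
        # scored somehow, and those scores ARE where the moral choices end up.
        def driving_reward(outcome):
            reward = 0.0
            reward -= 1000.0 * outcome["fatalities"]           # weight per death (someone chose this)
            reward -= 100.0 * outcome["serious_injuries"]      # weight per serious injury
            reward -= outcome["property_damage_usd"] / 10_000  # weight per $10k of damage
            return reward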
  18. Yes, you could program the car with the instruction "don't hit the person". But then, if that is simply not an option (a person to the left, 2 to the right, 10 in front, 3 passengers in the car--all too close to brake in time), then--if you left it at that--your algorithm is now in an error condition. Will it do nothing? Brake as hard as it can even though that doesn't help? Be stuck in an infinite loop as it calculates each option and gets a "no" each time because of the instruction "don't hit a person"? Like I said, the programmer can choose not to program an option for this error condition, but that is a choice as well. As for "income potential", etc.: Sure, that is not a good metric, nor may "pregnant woman", "old man", etc. be, but "car passenger" (=paying customer) versus "other people" may well be one where there will be pressure to consider it (as in the example of current SUVs built like tanks to protect the paying customer over the "other less relevant people"--sure, we can pretend that this is not the reason we build them like this and advertise their safety potential to the paying customer...but that doesn't make it true). There may be other metrics, once the sensors and software are smart enough to potentially consider them. Just imagine the pressure from "mothers-against-babies-killed-in-accidents.org".
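    A hypothetical sketch of that error condition (everything below is invented for illustration): if the only rule is "don't hit a person", the planner still needs some explicit behavior for the case where every option violates it:

        # Invented illustration of the "no legal option" case.
        def choose_path(options):
            """options: list of (path, people_hit) tuples for the candidate maneuvers."""
            legal = [path for path, people_hit in options if people_hit == 0]
            if legal:
                return legal[0]
            # Every option hits someone. Leaving this branch out is also a decision:
            # the program then fails in whatever its default failure mode is.
            # Any explicit fallback (brake hard, fewest people hit, ...) encodes
            # exactly the kind of priority being debated here.
            return min(options, key=lambda o: o[1])[0]   # e.g. fewest people hit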
  19. I completely agree with you on the trolley problem (actually I mentioned that in another thread). I think it is contrived and it pretends to deal with certainties, when in reality there are none. A million things can happen between the bridge and the time the trolley hits the people, and it is impossible to know that the fat man will stop the train but your body won't. I think its best use is to help people get to the realization that morality really lives in a completely different place than utilitarian decisions based on cost-benefit equations.

    BUT: The situation for a programmer is a bit different: If I write ANY type of program I have to account for the failure conditions and program into the system what to do in those conditions. There is only ONE way to avoid that, and that is to consciously not program ANY specific parameters for these conditions and let the program fail in whatever the default way is, but this is like putting my hands in front of my face to cover my eyes, or sticking my head in the sand. It is a conscious decision, like any other programming decision would be...and it's probably the worst.

    If you are on your final approach at below 100ft and another canopy is flying straight at you (which for some reason you didn't notice before) you can leave your decision up to your in-the-moment reaction and intuition (although that probably is determined by previous training and experience as well--but most likely you didn't consider the situation consciously before). But if you program a computer game for skydivers and this is a possible scenario in the game, you will definitely have programmed the decision parameters for this situation into the game. You HAVE to. There WILL be parameters. If "potential deaths" is NOT one of the parameters you use, then that is a conscious decision you made that will have certain consequences, just as much as if you decided to use this as one of the parameters. I'm not saying I know which option is better, just that these questions are real ones that the programmers face.
  20. I don't think that is relevant. It is always the trickiest part of a programming job to program for the cases that are VERY unlikely but not impossible. I can often finish the biggest part of the software that does everything that USUALLY should happen and then spend the same or even MUCH more time on catching the weird little exceptions that ALMOST never happen but really mess things up when they do. You have to program for these cases; there is no way around it. (And it's not about the autonomous car necessarily being at fault; it's about situations where the autonomous car still has some options for decisions, but where these options are limited by what happened before--no matter who (or what) is at fault.)
  21. Here is a good article: https://link.springer.com/article/10.1007/s10677-016-9745-2 It shows that AI designers, as well as philosophers, are clearly thinking very much about this, and that yes: they HAVE to program into the algorithm what decision metrics to use in the case of a crash: "For these reasons, automated vehicles need to be programmed for how to respond to situations where a collision is unavoidable; they need, as we might put it, to be programmed for how to crash." On the other hand, they do point out that these metrics don't deal with certainties (of death, etc.) but with probabilities, and that the problem is therefore not quite analogous to the well-known (and often loosely quoted) "trolley problem"--so maybe that goes to your point, billvon. Nevertheless, they are absolutely thinking about this, and are using a number of metrics to guide the decision making process in such a case. Another good quote from the article: "...we need to engage in moral reasoning about risks and risk-management. We also need to engage in moral reasoning about decisions under uncertainty."
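    A rough sketch of what "moral reasoning about decisions under uncertainty" could look like numerically (the probabilities and harm scores below are invented placeholders, not from the article):

        # Invented numbers; just illustrates expected-harm weighting under uncertainty.
        def expected_harm(outcomes):
            """outcomes: list of (probability, harm_score) for one maneuver."""
            return sum(p * harm for p, harm in outcomes)

        options = {
            "swerve_left": [(0.70, 0.0), (0.25, 30.0), (0.05, 100.0)],  # small chance of severe harm
            "brake_only":  [(0.40, 0.0), (0.60, 20.0)],                 # likely moderate-harm impact
        }
        best = min(options, key=lambda name: expected_harm(options[name]))
        print(best, round(expected_harm(options[best]), 1))
        # Which option "wins" depends entirely on how harm was scored in the first place.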
  22. Actually, these kinds of decisions are already made by car designers--it doesn't require AI or software: When they build heavier and heavier SUVs to tout them as safer than other cars, they are weighting safety towards the passengers inside the car, versus other users of the road outside: Increased mass always increases the energy in a crash and therefore cannot be safer. But it shifts the damage to the other participants in the crash, away from the passengers--not to speak of roo bars (which certainly aren't meant for kangaroos, if you're not driving in Australia).
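    A quick back-of-the-envelope illustration of that shift (made-up masses and speeds): in a head-on collision, the occupants of the lighter vehicle experience the larger velocity change.

        # Made-up numbers; rough perfectly-inelastic head-on collision estimate.
        m_suv, m_car = 2500.0, 1200.0   # kg (assumed vehicle masses)
        v_suv, v_car = 15.0, -15.0      # m/s, head-on at roughly 54 km/h each

        v_final = (m_suv * v_suv + m_car * v_car) / (m_suv + m_car)   # momentum conservation
        print("SUV occupants' delta-v:", round(abs(v_final - v_suv), 1), "m/s")   # ~9.7 m/s
        print("Car occupants' delta-v:", round(abs(v_final - v_car), 1), "m/s")   # ~20.3 m/s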
  23. I think they do: When you see a YouTube video of a plane landing on a highway, the pilot made a decision to endanger uninvolved drivers in order to save passengers (or himself, if he is the only one). He may THINK that he can land without seriously injuring anyone on the road, but he does not know it. He weighted his risk assessment. He may not even have done it consciously--but for AI that has to be done more explicitly. (Sully must have gone through a similar decision when he landed in the Hudson--everything worked out, but had he hit a boat, or instead made a sudden move avoiding the boat and endangering the airplane, the effects of his decision would have been more visible.)
  24. It has. These are real examples from real programmers in the field. (Even though my specific example was a bit contrived, as ryoder pointed out.) In a neural network AI, the question doesn't have to be answered directly, as it would in an expert system, but it will come up in an abstract way in the definition of "success" or in the parameters that guide the weighting system. (e.g.: success = as few potential deaths as possible; or: x likely serious injuries = 1 death; or: give more weight to the passengers of the car than to people outside, etc.)