Sorry, I wasn't clear on whether you or davelepka was making the "shouldn't be able to test out" argument. With all the different quotes and replies, I think I may have confused your 2 points of view.
In reply to:
If not, please explain what (other than your own bias) you're basing your argument on????
I've been a full-time professional educator for over 35 years. I do know something about courses, teaching and testing.
This explains your bias (sorry, a cheap shot, but I couldn't resist), but to my 1st point: have you ever (especially since the introduction of the ISP, and definitely post introduction of the old BIC) taken any USPA instructional courses? What you seem to think "testing out" means is not what it takes to challenge a USPA ratings course. Passing the written test is the smallest part of it. The biggest part is the teaching and evaluation of student performance, and it must be done following a defined format using clearly defined techniques.
If you were to take a rating course from me, I would presume that I didn't have to teach you how to teach (positive vs. negative reinforcement, time management, lesson plans, breaking down a topic into teachable portions, etc.) and would certainly allow you to "test out" of this portion of the class. If I did have to teach you this, that wouldn't say a lot for your 35 years of experience. Your point seems to be that you, as a professional educator, should have to listen to me spend the better part of a day explaining basic educational theory to someone like you. To me, this would be a waste of time for both of us.
This is where the methodology of USPA really is important; understanding whole/part/whole, repetitive analysis, and (most importantly) consistency come into play. If someone can satisfactorily "test out" of the ground evaluations, then I agree: they should not have to sit through the day(s) of training involved. I'd be surprised if many could test out without the contextual training that the courses provide.
I've been a full-time professional educator for over 35 years. I do know something about courses, teaching and testing.
Shame you haven't shared any relevant information on the subject here.
Who are you to question him? He has shared relevant info. He is a real teacher! Not one of those fake/wannabe instructors/coaches that will never know how to truly TEACH! Sorry, I must have gone into Kallend World for a minute.....
Look at someone like Craig Girrard. Why shouldn't he be able to take a Coach Course on Monday/Tuesday/Wednesday, and an AFFI course Friday to Friday? With quadruple the jumps and instructional experience of most of the AFFI/Es out there, would you agree he's kind of in a different boat?
No. He should take the same course as any other candidate, and follow the same guidelines. If he doesn't like it, he should have thought of that 10,000 jumps ago and gotten his rating back when he had 800 jumps or so, like everyone else.
Despite all of his accomplishments, if they don't include the USPA-mandated experience for becoming an instructor, then he's not qualified. He may be the head of the class at his coach course, and he may sail through the AFF cert course, but he needs to follow the rules like everyone else. All of his experience is an asset for sure, but if it doesn't include what the USPA says it should include to be an instructor, then he's not qualified to be a USPA-rated instructor.
I don't agree with you at all. He should be able to challenge any of the courses due to skill, time in sport, and ability to teach.
I had over 3000 jumps when I finally chose to become an instructor. If I had been required to do a year of coaching, something I was already proficient at, I would have been screwed. I took that dumb requirement (the BIC) and went to the AFF course. The course was not a breeze, but once I finished, real students had a good instructor to work with.
Note, I do not believe people with low time or no practical knowledge should be able to simply get a coach rating (questionable usefulness at best) and then become an AFF instructor the next week.
If we used your example in other areas of *employment*, highly qualified and experienced people would not be hired, because someone with a degree and very little practical knowledge would win.
I'm sorry sir, your qualifications and experience are astounding; however, you do not have an A+ certificate. (A+ is a very low-level qualification.)
(This post was edited by hookitt on Mar 8, 2010, 4:18 PM)
6. Overconfidence – People uniformly tend to overestimate their abilities and knowledge. A simple example is to ask 100 people if they think they are above, at, or below average intelligence. The VAST majority will reply that they are above average intelligence. Clearly only 50% can be.
Wrong, you are caught in a trap of your own making.
If I ask the first 100 people I meet after typing this, I can pretty much guarantee they are all above average intelligence, because my sample is not random (I'm sitting in an engineering department at a university). Additionally, a sample size of 100 is not nearly enough to guarantee an exact 50% result even if the sample is truly taken at random from the population; the margin of error would be nearly 10% (95% confidence limit).
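The "nearly 10%" figure follows from the standard large-sample formula for the margin of error of a sample proportion. As a quick sketch (assuming the worst-case p = 0.5, which matches an even above/below-average split):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion (normal approximation).

    p = 0.5 is the worst case, and z = 1.96 is the critical value
    for a 95% confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(100), 3))   # n = 100  -> 0.098, i.e. nearly +/-10%
print(round(margin_of_error(1000), 3))  # n = 1000 -> 0.031, about +/-3%
```

Because the margin of error shrinks with the square root of n, "adding a 0" to the sample size (100 to 1000) only cuts the uncertainty by a factor of about 3.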
No, I'm not wrong; you're introducing a new issue called sampling bias. Of course a proper random sample should be used. It's a separate and very real issue.
Beyond that, I'm trying to make a general point, and 100 was an easy number for people to visualize. If you like, we can just add a "0" to it. For illustrative purposes, for a general audience, it doesn't change the point. And of course, referencing statistical tables for proper sample sizes given confidence intervals, etc., is part of good analysis; it's just beyond the scope of this conversation, beyond my cautioning to beware of small sample sizes.
The point still stands that it is a known and testable fact that people are overconfident in their abilities and decision-making, and that this is a bias one needs to be careful about when using opinion, rather than testable data, to argue a point.
Your nitpicking aside, I would think we would both agree that the issue of quality of instruction, and how it has changed over time, can be resolved better with verifiable data properly analyzed than with the opinion and shouting that are the norm on this board.
Your point seems to be that you, as a professional educator, should have to listen to me spend the better part of a day explaining basic educational theory to someone like you. To me, this would be a waste of time for both of us.
That is basically it, and it is (more or less) the FAA's position on certifying flying instructors.
Incidentally, there's a lot of non-classroom work (labs, projects, etc.) going on in an engineering curriculum, ALL of which has to be evaluated by the prof.
(This post was edited by kallend on Mar 9, 2010, 3:16 PM)