So right after my Voodoo rant
I get an email ad from Waves
about their new SSL presets...

As to my view on the current marketing turds out there, here's one of my rants on the whole Waves campaign for various presets for their plugins. Plugins that, if you used an iLok, are totally useless now.
 
OK... Now I'm gettin' pissed
 
There was a campaign a while back concerning some well-known engineer (we'll call him Mr. Lord Sphagnum Moss; he seems like a great guy - great ears, great nose, etc.)... It claimed that these particular plugins will make you a muddafuddin star and your mixes as good as anything ever heard by mankind.
 
 
Hmmm...
 
 
Now, I can see the greatness here. Green Day, Ms. Crow
( http://www.campchaos.com/videos/campchaos-goblin.mov or
http://www.campchaos.com/blog-archives/2006/05/napster_bad_good_or_goblin.html )
 
But to think that having the particular EQ presets that were used with the particular mic, room, performer, and particular day (post-fellatio all-nighter) is anything more than interesting fodder for conversation is a bit misleading.
 
Anyone who works with any type of recording for very long realizes that making a record is a bit like the way Bob Ross paints. I recall reading a bio of him where the writer mentioned something like [paraphrase] "many a MILF tried to buy the 'Paint like Bob' ( http://www.bobross.com ) kits, only to find the 'happy trees' that they painted looked more like smashed earthworms".
 
Bob had thirty years of experience, sitting in Alaska, working a day job in the armed forces and painting 'cause he loved the process. His trees are happy 'cause he busted his ass for three decades. He learned how to effortlessly speak with his craft. He spent ungodly hours going over and over happy little trees.
 
To quote Butthead (watching a Stone Temple Pilots video), "It takes more than bears to make a great video, Beavis." You can't assume anything any particular person does in a certain situation will be a panacea. You can learn from it, fer sure, but to think this will really help any more than as an academic exercise is pure nonsense.
 
It takes experience to be able to go into many a situation and be even moderately successful. I've worked with many a performer; even within the same day, same song, different take, each situation is unique, sacred, whatever. EQ settings are just a small part, and they're never the same. At least not to a decent engineer who doesn't have marketing turds stuck in his ears...
 
That's what makes music great; and, if you understand the Quincy Jones concept (para-quoted from George Martin's book Making Music: "...no matter how much you do your homework, you have to leave enough room for the Lord to walk thru..."), what makes great recordings vs. crap.
 
So again, market BS will probably win out; good luck to Waves (I really wish they'd spend more time on fixing their dang Winblows drivers for their DSP accelerators).
Gee, and we haven't even touched on the actual music, song, performance. Think maybe that's what made the hit? No, of course not. The proverbial 14-year-old girl really only buys the latest Shakira 'cause they used the Waves whiz-bang, MAKE YOU A STA' MUDDERFUKKER presets...
 
No wonder most of the stuff released now sucks...


Then a year later I see a post concerning "Why Audio Quality Matters."

Now go to this site and watch the vid with your pinky up: http://philoctetes.org/Past_Programs/Deep_Listening_Why_Audio_Quality_Matters

Here's an edited response I sent them as I watched this:

Ok... this is so full of it....

For instance, the comments at about 39:00 into the vid... talking about "3 dimensions..." and such with "panpots..."

Aural localization is based as much on head-related time differences as on amplitude - not amplitude alone. If one were able to put any type of meter (PPM, VU, etc.) on one's ears, a point source in an anechoic chamber at a 45-degree angle to the front of the listener would show fairly similar amplitudes at each ear; the brain localizes primarily by the time difference that each ear "hears" (at least at lower frequencies - see the excerpt below).

From Wikipedia http://en.wikipedia.org/wiki/Interaural_time_difference -
"Experiments conducted by Woodworth (1938) tested the duplex theory by using a solid sphere to model the shape of the head and measuring the ITDs as a function of azimuth for different frequencies. The model used had a distance between the 2 ears of approximately 22-23cm. Initial measurements found that there was a maximum time delay of approximately 660μs when the sound source was placed at directly 90° azimuth to one ear. This time delay correlates to a sound input with a frequency of 1500Hz. The results concluded that when a sound played had a frequency less than 1500Hz the wavelength is greater than the time delay between the ears. Therefore there is a phase difference between the sound waves entering the ears providing acoustic localisation cues. With a sound input with a frequency closer to 1500Hz the wavelength of the sound wave is similar to the natural time delay. Therefore due to the size of the head and the distance between the ears there is a reduced phase difference so localisations errors start to be made. When a high frequency sound input is used with a frequency greater than 1500Hz, the wavelength is shorter than the distance between the 2 ears, a head shadow is produced and ILD provide cues for the localisation of this sound."

Other studies have suggested even higher intervals for ITD emulation in stereo recording when combined with ILD (interaural level difference), especially when determining the sound stage and distance to the emulated object.

Then there's the Haas effect (from Wikipedia: http://en.wikipedia.org/wiki/Precedence_effect ) -
The Haas effect is a psychoacoustic effect related to a group of auditory phenomena known as the Precedence Effect or law of the first wave front. These effects, in conjunction with sensory reaction(s) to other physical differences (such as phase differences) between perceived sounds, are responsible for the ability of listeners with two ears to accurately localize sounds coming from around them.

When two identical sounds (i.e., identical sound waves of the same perceived intensity) originate from two sources at different distances from the listener, the sound created at the closest location is heard (arrives) first. To the listener, this creates the impression that the sound comes from that location alone due to a phenomenon that might be described as "involuntary sensory inhibition" in that one's perception of later arrivals is suppressed.

The Haas effect occurs when arrival times of the sounds differ by up to 30–40 ms. As the arrival time (in respect to the listener) of the two audio sources increasingly differ beyond 40 ms, the sounds will begin to be heard as distinct; in audio-engineering terms the increasing time difference is described as a delay, or in common terms as an echo.

The Haas effect is often used in public address systems to ensure that the perceived location and/or direction of the original signal (localization) remains unchanged. In some instances, usually when serving large areas and/or large numbers of listeners, loudspeakers must be placed at some distance from a stage or other area of sound origination. The signal to these loudspeakers may be electronically or otherwise delayed for a time equal to or slightly greater than the time taken for the original sound to travel to the remote location. This serves to ensure that the sound is perceived as coming from the point of origin rather than from a loudspeaker that may be physically nearer the listener. The level of the delayed signal may be up to 10 dB louder than the original signal at the ears of the listener without disturbing the localization.

The Haas effect also explains why it is possible to simulate a complete complex audio field from only two sound sources in stereophonic and other binaural audio systems. It is also utilized in the generation of more sophisticated audio effects by devices such as matrix decoders in surround sound technologies, such as Dolby Pro Logic.

For a time in the 1970s, audio engineers used the Haas effect to simulate that a sound was coming from a single speaker in a stereo sound system, when it was actually coming from both. This was to compensate for the fact that a sound coming from a single speaker would be 3 dB lower in volume than a sound coming from both. This technique has problems if the stereo sound is mixed to mono, as a comb filter effect would occur. Also, the aesthetics of sound mixing changed to exclude the use of solo instruments emanating from a single corner of the sound field in most popular recordings.

The effect is named after Helmut Haas, who described the effect in his doctoral dissertation "Über den Einfluss eines Einfachechos auf die Hörsamkeit von Sprache" at the University of Göttingen, Germany. An English translation was published in December 1949 as The Influence of a Single Echo on the Audibility of Speech.
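To put numbers on the delayed-loudspeaker trick in that excerpt: it's just distance over the speed of sound, plus a small Haas-range offset so the stage wavefront arrives first. A minimal sketch - the 40 m distance and the 15 ms offset are example values I picked, not from any particular install:

C = 343.0  # speed of sound in air, m/s (at roughly 20 C)

def fill_speaker_delay_ms(distance_from_stage_m, haas_offset_ms=15.0):
    """Delay for a remote loudspeaker so its output arrives just after
    the acoustic wave from the stage, keeping localization anchored
    at the stage (precedence effect)."""
    travel_ms = distance_from_stage_m / C * 1000.0
    return travel_ms + haas_offset_ms

# A fill speaker 40 m from the stage:
print(f"{fill_speaker_delay_ms(40.0):.1f} ms")  # ~131.6 ms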


Using just a pan pot is a poor representation of stereo imaging...
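To make that concrete - and to show why the 1970s Haas-panning trick from the excerpt combs when folded to mono - here's a sketch. The pan law is the standard constant-power law; none of this is from the video, it's just my illustration:

import math

# A pan pot is amplitude-only: a constant-power law like this is all
# it gives you. There is no time-of-arrival (ITD) cue at all.
def constant_power_pan(position):
    """position: -1.0 (hard left) .. +1.0 (hard right).
    Returns (left_gain, right_gain) with constant combined power."""
    angle = (position + 1.0) * math.pi / 4.0   # maps to 0 .. pi/2
    return math.cos(angle), math.sin(angle)

# The mono fold-down problem: summing a signal with a delayed copy
# nulls every frequency whose half-period fits the delay; the notches
# land at f = (2k + 1) / (2 * tau).
def comb_notches_hz(delay_ms, count=5):
    tau = delay_ms / 1000.0
    return [(2 * k + 1) / (2.0 * tau) for k in range(count)]

print(constant_power_pan(0.0))   # center: ~(0.707, 0.707), -3 dB per side
print(comb_notches_hz(20.0))     # 20 ms Haas delay -> notches at 25, 75, 125, 175, 225 Hz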

(I recall reflecting planes placed in some control rooms being called "Haas kickers" - they've since fallen out of vogue...)

And I wonder if they even know what the effect of tangential tracking error was on the turntable they used, let alone any ELF modulation due to turntable motion. A DC-coupled Lissajous pattern would show that...
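For anyone who hasn't done it: a Lissajous is just the left channel plotted against the right. A bare sketch (numpy/scipy/matplotlib; the file name is hypothetical) - DC-coupled, meaning no high-pass filtering, so subsonic junk from turntable motion shows up as the whole figure wandering off center:

import matplotlib.pyplot as plt
from scipy.io import wavfile

# Hypothetical capture of the turntable output; the file name is made up.
rate, data = wavfile.read("turntable_capture.wav")
left = data[:, 0].astype(float)
right = data[:, 1].astype(float)

# DC-coupled: no high-pass filter, so ELF/subsonic content from
# turntable motion (rumble, arm wander) is left in the plot.
plt.plot(left, right, linewidth=0.2)
plt.xlabel("Left")
plt.ylabel("Right")
plt.title("DC-coupled Lissajous (L vs R)")
plt.gca().set_aspect("equal")
plt.show()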

Dang - a lot of what they talk about here is not correct in a strict scientific sense.

As to hearing all these differences between digital and analog: there's a guy at Linear Tech... actually, see below.


As to the Level Wars, discussed at 31:00, please note my "Turn It Up Mudderfudder" rant:

- For the reasons for high RMS levels, look at my article.
- As to his statement about radio: I was a broadcast engineer; we'd get MASSIVE FINES from the FCC if we peaked over 102% modulation.

Then later I sent:

Wow... There was so much partial/misinformation in that Dec 6th session that I failed to finish my train of thought about the Linear Tech guy....

There was an FAE (field applications engineer) I met during the design of one of my products (http://www.ajawamnet.com/ajawam3/pat6208266.pdf) named Alan Rich, who worked at Linear Technology - the company that makes a lot of the semiconductors like those in the products your panel used.

If I recall, he was one of their first few FAEs, and he mentioned how tough it was to get in: during the interview process, the candidate is presented with a large schematic of one of their ICs and has to walk the interviewer through the circuitry.

So years later, I was working on a design for Lane Poor, a well-recognized manufacturer of very high-end pickups. We were looking for a replacement for an Analog Devices part, an opamp that had become scarce. I called Alan and mentioned my dilemma. He said that LT did in fact have some fairly high-quality opamps, but he also mentioned that for most audio applications the Analog Devices products were superior to the ones LT offered - and not in a voodoo sense; in actual measurable parameters that are typical in audio product topologies.

I mentioned that yes, indeed, I had seen and heard (hearing being very subjective, so I rarely go by that alone) what I thought was a discernible difference. In fact, electrically, it seemed he was right.

Now way back when, there was a guy named George Massenburg, the inventor of the modern parametric EQ, based somewhat on Sallen-Key/state-variable filters. At the time everyone, including Alan Rich's friend Walter Jung (a well-known engineer in his own right), dismissed George's claim that you could build a fully parametric EQ with constant Q.

But indeed George did it, and it became very useful for audio production. Even Walter eventually included it in his now-famous IC Op-Amp Cookbook series.
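For anyone curious what "fully parametric" buys you: each band gives independent frequency, gain, and Q. Massenburg's box was analog; the standard digital equivalent is the peaking biquad from Robert Bristow-Johnson's Audio EQ Cookbook, sketched below. The sample rate and band settings are just example values:

import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Biquad coefficients for a peaking (parametric) EQ band, per
    Robert Bristow-Johnson's Audio EQ Cookbook. Center frequency,
    gain, and Q are fully independent knobs."""
    a_lin = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b0 = 1.0 + alpha * a_lin
    b1 = -2.0 * math.cos(w0)
    b2 = 1.0 - alpha * a_lin
    a0 = 1.0 + alpha / a_lin
    a1 = -2.0 * math.cos(w0)
    a2 = 1.0 - alpha / a_lin
    # Normalize so the recursion runs with a0 == 1
    return [c / a0 for c in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

# Example band: +6 dB bell at 1 kHz, Q of 2, 48 kHz sample rate
b, a = peaking_eq_coeffs(48000, 1000.0, 6.0, 2.0)
print(b, a)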

Anyway, during the conversation about opamps, I asked Alan if he'd ever met anyone who really, really could pass blind tests as to sound quality. He mentioned that one time he was present during a listening test where Walter Jung was able to tell the difference between an opamp die packaged in ceramic and one packaged in epoxy (the typical IC packaging you see used for most semiconductors).

He was at a loss to explain it, as am I. Maybe it was possible; maybe there were other factors in the blind test that contributed to it. For instance, most of the audiophile "golden ears" tests are conducted in a scientifically very limited way - typically, the zillions of things that can affect inter-equipment coupling are ignored, which in my opinion renders these tests totally useless.

Again, I implore you to read my market turd rants: http://www.ajawamnet.com/ajawamnet/marketturd.htm

Put it up for discussion... put it in the hands of these experts. I would love to see one, just one, deny that even the most technically horrible recording can have value if the art is truly there. Look at Michelle Shocked's story.
Look at Tone Loc.

Then also look at what's really important.

Take the most technically stupendous recording, judged by all these golden ears, and compare that to Yo-Yo Ma sitting in front of them playing Mozart.

Now you get the picture - read my State of the Music Biz debate/article a few times....

The human - the music itself. Not the feeble attempt at time-shifting performances...



