
phase shift issues and thoughts

[Attached image: phase shift with a broad-range driver with high-shelf EQ (for mid- and high-frequency duty)]


While phase shift affects the signal’s timing relative to the original signal, within a single broad-range driver there is no second, displaced driver for it to be offset against over the frequency range, so I’m thinking it’s indiscernible by listening (it can’t be destructive to another source, because there is no other).

Both systems (broad-range, and multi-driver with crossovers) exhibit constructive and destructive interference: the former as a result of the frequency’s wavelength relative to the driver circumference, and the latter from driver separation (with its axes of nulls determined by the crossover configuration).
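For what it’s worth, a rough numeric sketch of that second effect (the spacing and crossover frequency below are just assumed values, not any particular design): two in-phase drivers a distance d apart, heard off-axis, accumulate a path-length difference that eventually carves a null.

```python
# Sketch: off-axis response of two in-phase drivers at the crossover
# frequency. Spacing and frequency are assumptions for illustration.
import numpy as np

c = 343.0      # speed of sound, m/s
fc = 2500.0    # assumed crossover frequency, Hz
d = 0.15       # assumed center-to-center driver spacing, m
lam = c / fc   # wavelength at the crossover point

for deg in range(0, 91, 15):
    theta = np.radians(deg)
    delta = d * np.sin(theta)        # path-length difference to the listener
    phase = 2 * np.pi * delta / lam  # relative phase between the two drivers
    # two equal in-phase sources summed with this offset, vs. on-axis
    rel_level = 20 * np.log10(abs(np.cos(phase / 2)) + 1e-12)
    print(f"{deg:3d} deg off-axis: {rel_level:+6.1f} dB")
```

(The null shows up where the path difference hits half a wavelength; a different crossover alignment would move those axes around, which is the ‘determined by crossover configuration’ part.)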


So ‘it’s always somethin’…   

It’s how we each set out our project’s goals and balance our perceived results.  And sometimes the best offense is a great defense (i.e. attenuating the negatives).

So I’m pondering the effects of removing low-frequency information from the higher frequencies (via a crossover) and playing them through independent drivers...  I.e., think of a single-membrane microphone recording a complex music signal, which is a net composition of small waves riding on bigger waves.  Doesn’t removing the small waves from their larger reference waves (as recorded at the microphone) cause added phase issues?  (Sorry, kinda hard to describe)
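To make the question concrete, here’s a little sketch I tried (the crossover type, frequency, and test tones are just my assumptions): split a ‘small waves on big waves’ signal with a 4th-order Linkwitz-Riley crossover, sum the two legs back, and compare to the original.

```python
# Split a composite signal with an LR4 crossover, recombine, and compare.
# Crossover frequency and tones are assumptions for illustration.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
t = np.arange(fs) / fs
sig = np.sin(2*np.pi*50*t) + 0.2*np.sin(2*np.pi*3000*t)  # big wave + small wave

fc = 1000.0  # assumed crossover point
lp = butter(2, fc, btype="low", fs=fs, output="sos")
hp = butter(2, fc, btype="high", fs=fs, output="sos")

# LR4 legs = two cascaded 2nd-order Butterworth sections each
low = sosfilt(lp, sosfilt(lp, sig))
high = sosfilt(hp, sosfilt(hp, sig))
summed = low + high

spec_in, spec_out = np.abs(np.fft.rfft(sig)), np.abs(np.fft.rfft(summed))
freqs = np.fft.rfftfreq(len(sig), 1/fs)
for f0 in (50, 3000):
    i = np.argmin(np.abs(freqs - f0))
    print(f"{f0:4d} Hz magnitude ratio (summed/original): {spec_out[i]/spec_in[i]:.4f}")
print("peak sample difference (phase shift):", np.max(np.abs(summed - sig)))
```

The per-frequency magnitudes come back essentially intact, but the summed waveform no longer overlays the original: the split-and-sum behaves as an allpass, i.e. the added ‘phase issue’ is a frequency-dependent time shift rather than a level error.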

 

Thoughts?  

Comments

  • edited January 2020
    tajanes said:
    So I’m pondering the effects of removing low-frequency information from the higher frequencies (via a crossover) and playing them through independent drivers...  I.e., think of a single-membrane microphone recording a complex music signal, which is a net composition of small waves riding on bigger waves.  Doesn’t removing the small waves from their larger reference waves (as recorded at the microphone) cause added phase issues?  (Sorry, kinda hard to describe)

     

    Thoughts?  


    What happens when you play a high-frequency signal and a low-frequency signal from the same diaphragm is that the high-frequency signal "rides the wave" of the low frequencies. This is often referred to as Doppler distortion, and is a form of intermodulation. This was recently brought up in the Purifi thread; you can read a bit about it in the tech section of Purifi's site.

    In a speaker the displacement can be significant (the example at Purifi uses ±12.5 mm); in a microphone, however, there is very little movement at all, so there's no real intermodulation issue there if you compare diaphragm displacement against wavelength.
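    To put rough numbers on that comparison (the excursion comes from the Purifi example; the mic figure is just an order-of-magnitude guess): the peak Doppler shift is the peak diaphragm velocity over the speed of sound.

    ```python
    # Peak fractional Doppler shift ~ peak diaphragm velocity / speed of sound.
    # Excursion values: +/-12.5 mm per the Purifi example; ~1 um assumed for a mic.
    import math

    c = 343.0  # speed of sound, m/s

    def peak_doppler(f_low, excursion_m):
        v_peak = 2 * math.pi * f_low * excursion_m  # peak velocity of the motion
        return v_peak / c                           # fractional frequency shift

    print(f"speaker at 50 Hz, 12.5 mm: {peak_doppler(50, 0.0125) * 100:.2f} % peak shift")
    print(f"mic at 50 Hz, ~1 um:       {peak_doppler(50, 1e-6) * 100:.6f} % peak shift")
    ```

    About a 1 % swing for the woofer versus a vanishingly small one for the mic diaphragm.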

    In any case, if you go listen to the test at Purifi, the phase distortion caused by the Doppler effect is much less of an issue than the amplitude distortion caused by a non-linear driver BL.
    I'm not deaf, I'm just not listening.
  • edited January 2020
    Yes, but... since the music was recorded as one composite, it's actually the separation that causes a dislocated effect, by not 'riding the recorded wave'; quite the opposite (I'd think). Think of how live music, at least, is recorded: onto a single-diaphragm mic.
    So isn't some of the small waves' reference (as recorded) now missing?

    And in the example article, it's adding two phase-shifted signals together, which is somewhat different from a single comprehensive music recording.
  • edited January 2020
    I'm not sure I understand what you're trying to convey. Music isn't recorded by a single mic in the centre of a recording booth; often each instrument is recorded separately by multiple microphones, then mixed, processed, etc.

    If you're trying to refer to the sound localization in the phase data, you should maybe look at binaural recording, since you need two mics to establish a phase difference and create a true sense of sound location. Separating high and low frequencies really has nothing to do with this.

    By recording using a single mic, everything recorded can be treated as "minimum phase", with the only timing references being the delays between different sounds reaching the mic due to distance.
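    A small sketch of the two-mic point (the spacing and angles are assumed values): with one mic there's no second channel to be out of phase with; with two, the arrival-time gap between channels is what encodes direction.

    ```python
    # Inter-mic arrival-time difference vs. source angle, and the phase it
    # implies at one frequency. Spacing and frequency are assumptions.
    import math

    c = 343.0  # speed of sound, m/s
    d = 0.17   # assumed mic (or ear) spacing, m
    f = 500.0  # assumed test frequency, Hz

    for deg in (0, 30, 60, 90):
        theta = math.radians(deg)
        itd = d * math.sin(theta) / c  # inter-mic time difference, seconds
        phase = 360.0 * f * itd        # the same gap expressed as phase at f
        print(f"{deg:2d} deg: {itd * 1e6:6.1f} us -> {phase:5.1f} deg at {f:.0f} Hz")
    ```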
    I'm not deaf, I'm just not listening.
  • edited January 2020

    True, yes, and each instrument/voice has a broad, multi-frequency dynamic range…  but so as not to add layer upon layer, and to see the forest for the trees (focusing on the lack of signal continuity that comes from presenting separated frequencies through displaced drivers), I simplified it to a single-mic recording.  And there are further unique issues in playback, where speakers may be slicing and dicing among 2, 3 or more drivers (but it's the shorter wavelengths, from the 1 to 5 kHz range and up, that are potentially most problematic; see the rough numbers below).

    Sorry, it's not so easy for me to explain in writing (I'm a visual person).
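    Since numbers are easier than words here, a rough look at why that band is the awkward one (the spacing is just an assumed typical two-way value): the wavelengths from 1 to 5 kHz are comparable to common center-to-center driver spacing, which is where the off-axis interference discussed above bites hardest.

    ```python
    # Wavelength vs. an assumed 15 cm center-to-center driver spacing.
    c = 343.0       # speed of sound, m/s
    spacing = 0.15  # assumed center-to-center spacing, m

    for f in (1000, 2000, 3000, 5000):
        lam = c / f
        print(f"{f:5d} Hz: wavelength {lam * 100:5.1f} cm "
              f"({lam / spacing:4.2f} x the {spacing * 100:.0f} cm spacing)")
    ```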

     

  • Linear and non-linear distortion, directivity index, RT60, etc. Focus on the things you have control over; forget about the things you don't.
    I'm not deaf, I'm just not listening.
  • edited January 2020

    Ha, if only.  

    Unfortunately the mirror doesn’t let me forget I’m getting older; for the rest, I can find solutions...
