I'm trying to better understand the differences between analog, digital, continuous, and discrete signals in the context of signal processing. Here's what I've gathered so far, but I’d like some clarification:
What I Understand

Analog vs. Digital:
This depends on the y-axis (amplitude) values: if the signal's amplitude can take any value in a continuous range, it's an analog signal; if it can take only values from a finite set (e.g., 0 and 1), it's a digital signal.
Continuous vs. Discrete:
This depends on the x-axis (time) values: if the signal is defined at every point on the x-axis, it's a continuous signal; if it's defined only at specific points (e.g., integer indices), it's a discrete signal.

My Current Categorization

Based on this, I think signals can be categorized as:
Continuous-Analog: Signals like sine waves where both x and y are continuous.
Continuous-Digital: Signals with continuous x-values but discrete y-values, like square waves.
Discrete-Digital: Signals where both x and y are discrete, such as the sampled and quantized stream in a digital audio system.
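To check my understanding, here is a minimal NumPy sketch of the three categories above (the dense `t` grid only approximates continuous time, since a computer cannot represent a true continuum):

```python
import numpy as np

# Continuous-analog (approximated): dense time axis, amplitude may be any real value.
t = np.linspace(0.0, 1.0, 1000)           # stand-in for continuous time
analog = np.sin(2 * np.pi * 5.0 * t)      # 5 Hz sine wave

# Continuous-digital: defined for "all" t, but amplitude restricted to {-1.0, +1.0}.
square = np.where(analog >= 0, 1.0, -1.0)

# Discrete-digital: sample at integer indices, then quantize the amplitude.
n = np.arange(50)                         # discrete time: integer sample indices
samples = np.sin(2 * np.pi * 5.0 * n / 50.0)
quantized = np.round(samples * 4) / 4     # crude uniform quantizer: 9 levels, step 0.25
```

(The un-quantized `samples` array is, I suspect, exactly the fourth combination I ask about below.)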
I’ve noticed that many people equate:
Continuous signals = Analog and Discrete signals = Digital
(I think this is not correct in general.)
Is it possible to have a "discrete-analog" signal?
That is, a signal where the x-axis is discrete but the y-axis is continuous. If so, what would such a signal look like in practice? Does my understanding of the categories seem accurate?
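To make the question concrete, here is what I imagine a discrete-analog signal would be: defined only at integer indices, but with amplitudes drawn from a continuum (I realize `float64` is itself finite-precision, so this is only an idealization):

```python
import numpy as np

n = np.arange(10)                     # discrete x-axis: integer sample indices
x = np.sin(2 * np.pi * 0.05 * n)      # y-values are arbitrary reals, not a fixed alphabet
print(x[:4])                          # [0.         0.30901699 0.58778525 0.80901699]
```

I suspect the stage between an ideal sampler and the quantizer inside an ADC (e.g., the output of a sample-and-hold circuit) would be a real-world example, but I'm not sure.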
Are there any nuances I'm missing in how these types of signals are defined or classified? I'd appreciate examples or corrections to my understanding. Thanks for your insights! (I'm also still learning how to phrase questions on this Stack Exchange, so suggestions for improving the question itself are welcome.)
