One thing that I think is missing from the answers here, but is essential for understanding Chi-Squared ($\chi^2$) tests, is how the data are actually collected. Depending on the answer to this question, you end up carrying out different tests (by name) that describe the end goal of the analysis (even though the final answer would be the same).
In this type of categorical data analysis, $\chi^2$ tests generally fall into one of the three tests described below.
1. Pearson's $\chi^2$ test for Goodness of Fit
Data Collection Setup: This test consists of data that are sampled from a single population and then classified into exactly one level of a single characteristic.
Goal: You have a pre-specified model for the proportions/probabilities/percentages with which each level occurs in the population, and you want to test whether the observed data are consistent with it.
Example: You might be interested in testing the eye color of offspring of parents with blue and brown eyes. For simplicity, assume the only outcomes are blue, brown, and green. You have a model, specified a priori, that the eye colors of the offspring occur in the ratio 1:2:1 (blue:brown:green). This translates to a model and a test of the null hypothesis:
$$
H_0: p_{blue}=1/4, p_{brown}=1/2, p_{green}=1/4
$$
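In case a concrete computation helps, here is a minimal sketch in Python using scipy.stats.chisquare; the offspring counts are hypothetical, invented purely for illustration:

```python
import numpy as np
from scipy.stats import chisquare

# Hypothetical counts of offspring eye colors: blue, brown, green
observed = np.array([22, 55, 23])

# Expected counts under H0 (ratios 1:2:1 -> probabilities 1/4, 1/2, 1/4)
expected = observed.sum() * np.array([0.25, 0.50, 0.25])

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(stat, p)  # a large p-value means no evidence against the 1:2:1 model
```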
2. $\chi^2$ test of Homogeneity for a Contingency Table with One Margin Fixed
Data Collection Setup: From each group of interest in the population you take a random sample of a predetermined, fixed size and classify each observation into a single characteristic/response category. This forms a contingency table where one classification refers to the population and the other to the characteristic/response category.
Goal: The objective is to test whether the populations are similar or homogeneous with respect to their individual cell probabilities. This translates to determining if the observed proportions/probabilities in each characteristic/response category are nearly the same for each population.
Example: You might be interested in testing whether two drugs, A and B, have similar proportions of people reporting mild, moderate, or severe side effects. You decide, again a priori, that you will sample, say, 100 patients who received drug A and then separately and independently (through stratified random sampling) randomly sample 400 patients from those who received drug B. You are interested in testing whether there are significant differences in the proportions of patients who reported mild, moderate, and severe symptoms for each drug. This would translate to a model and a test of the null hypothesis:
$$\begin{aligned}
H_0: p_{\text{Drug A, Mild}} &= p_{\text{Drug B, Mild}}, \\
p_{\text{Drug A, Moderate}} &= p_{\text{Drug B, Moderate}}, \\
p_{\text{Drug A, Severe}} &= p_{\text{Drug B, Severe}}
\end{aligned}
$$
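Computationally, this is carried out exactly like the independence test below; only the sampling story differs. A minimal sketch with scipy.stats.chi2_contingency, where the side-effect counts are hypothetical and the row totals are the fixed sample sizes (100 and 400):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: Drug A (n=100), Drug B (n=400) -- these margins were fixed by design
# Columns: Mild, Moderate, Severe (hypothetical counts)
table = np.array([[ 50,  30,  20],
                  [210, 120,  70]])

stat, p, dof, expected = chi2_contingency(table)
print(stat, p, dof)  # dof = (2 - 1) * (3 - 1) = 2
```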
3. $\chi^2$ test of Independence for a Contingency Table with Neither Margin Fixed
Data Collection Setup: This test consists of data sampled from a single population of interest; each observation is then simultaneously classified into two characteristics/response categories after the data are observed (i.e., neither margin is fixed a priori).
Goal: This data collection process results in a contingency table in which both marginal totals are random. You are interested in determining whether the two characteristics/response categories were seemingly generated through an independent process or whether certain levels of one characteristic/response category tend to be associated with, or contingent on, the levels of the other.
Example: You might be interested in testing whether being Christian or Jewish is associated with a preference for a Conservative, Independent, or Liberal candidate for office. You conduct a survey, examine your data, and then cross-classify each respondent as being Christian/Jewish and, simultaneously, by which candidate they prefer for an upcoming election. If being Christian and supporting the Conservative candidate are independent, then
$$
P(\text{Christian} \cap \text{Conservative Preference}) = P(\text{Christian}) \times P(\text{Conservative Preference})
$$
Applying this logic to all the cross-classifications then translates directly into the null hypothesis to be tested, which would simply be, $H_0$:
$\text{Each cell probability equals the product of the corresponding row and column marginal probabilities}$
and would simply be tested with:
$$
\chi^2 = \sum_{\text{cells}}\frac{(O-E)^2}{E}
$$
with degrees of freedom equal to the number of rows $r$ minus 1, times the number of columns $c$ minus 1. This is because the number of degrees of freedom is initially $rc-1$, since there are $rc$ cells into which a single random sample can be classified, but from this we must subtract the number of estimated parameters, which is $(r-1)+(c-1)$, because there are $r-1$ free parameters among the row margins and $c-1$ free parameters among the column margins. Therefore, the total degrees of freedom in this case is simply (which, by algebra, is equal to the degrees of freedom in the other tests):
$$\begin{aligned}
rc-1 - (r-1) - (c-1)&=(r-1)(c-1) \\
&=(\text{Number of Rows} - 1) \times (\text{Number of Columns} - 1)\\
\end{aligned}
$$
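To make the statistic and the degrees-of-freedom bookkeeping concrete, here is a sketch that computes the expected counts $E$ from the margins by hand and checks the result against scipy; the religion-by-preference counts are hypothetical:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: Christian, Jewish; Columns: Conservative, Independent, Liberal
observed = np.array([[70, 40, 50],
                     [30, 45, 65]])  # hypothetical survey counts

n = observed.sum()
row = observed.sum(axis=1, keepdims=True)  # row marginal totals
col = observed.sum(axis=0, keepdims=True)  # column marginal totals
expected = row * col / n                   # E under independence: (row total x column total) / n

chi2 = ((observed - expected) ** 2 / expected).sum()     # sum over cells of (O-E)^2 / E
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)  # (r-1)(c-1)
print(chi2, dof)

stat, p, dof_scipy, _ = chi2_contingency(observed, correction=False)
print(stat, dof_scipy)  # matches the hand computation
```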
Last, but not Least: A Warning About Your Data
In the example data you provided, you show a contingency table in which you have at least two expected cell counts of less than 5, which is generally considered a violation of the assumptions of the Chi-Squared test, so your results may not be valid. In this case, you should consider collapsing categories to increase the expected cell counts, or conduct an alternative test such as Fisher's Exact Test or a simulation-based (permutation) test, a sketch of which follows.
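As a rough sketch of the simulation route (the tiny, deliberately sparse table here is hypothetical), you can repeatedly shuffle the labels of one classification to approximate the null distribution of the statistic:

```python
import numpy as np

def chi2_stat(table):
    """Pearson chi-squared statistic: sum over cells of (O - E)^2 / E."""
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row * col / table.sum()
    return ((table - expected) ** 2 / expected).sum()

rng = np.random.default_rng(0)
observed = np.array([[8, 2],
                     [1, 5]])  # hypothetical: small expected counts make the chi^2 approximation suspect
chi2_obs = chi2_stat(observed)

# Rebuild the raw (row label, column label) pairs behind the table, then
# repeatedly permute one set of labels to simulate the null of independence.
rows = np.repeat(np.arange(observed.shape[0]), observed.sum(axis=1))
cols = np.concatenate([np.repeat(np.arange(len(r)), r) for r in observed])

n_sim, hits = 10_000, 0
for _ in range(n_sim):
    table = np.zeros_like(observed)
    np.add.at(table, (rows, rng.permutation(cols)), 1)
    hits += chi2_stat(table) >= chi2_obs
print("simulated p-value:", hits / n_sim)
```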