
Setup: three doors hide one car and two goats. You pick a door; the host, who knows where the car is, opens one of the other two doors to reveal a goat, then offers you the chance to switch to the remaining closed door.

The question is: at this point, should you switch doors?
Let us split the initial choice into its two possible cases and see what "switching" and "staying" lead to in each.
This analysis can be summarized in the following table:
| Initial pick | Staying | Switching | Probability of this case |
|---|---|---|---|
| First pick is the car | Win | Lose | 1/3 |
| First pick is a goat | Lose | Win | 2/3 |
| Overall win probability | 1/3 | 2/3 | |
As the table shows, switching raises your probability of winning from 1/3 to 2/3: switching wins exactly when your first pick was a goat, which happens with probability 2/3, while staying wins only when your first pick was the car, with probability 1/3.
The correct strategy for the Monty Hall problem is therefore: switch.
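The 2/3 advantage of switching can also be checked empirically. Here is a minimal Monte Carlo sketch in Python (illustrative only; the function name and trial count are my own choices, not part of the original analysis):

```python
import random

def play(switch, trials=100_000):
    """Simulate Monty Hall games; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a goat door that is neither the pick nor the car
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay  :", play(False))  # ≈ 1/3
print("switch:", play(True))   # ≈ 2/3
```

When the host's choice between two goat doors is arbitrary (as here), it does not affect the switch outcome: switching loses whenever the first pick was the car, regardless of which goat door was opened.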
setInterval repeatedly executes a task at a fixed time interval; clearInterval stops the repetition.
```js
const callback = () => console.log("Executing callback");

const intervalID = setInterval(callback, 200);

setTimeout(() => {
  clearInterval(intervalID);
  console.log("setInterval() was cleared");
}, 1000);
```

Output (the leading `5` is the browser console's repeat counter; the exact number of firings depends on timer scheduling, since the 1000 ms firing races the clearInterval):

```
5 Executing callback
setInterval() was cleared
```
2. Download and unpack Panoramic. It must be extracted into `\\wsl.localhost\Ubuntu-22.04\home\<your username>`.
Release v1.3.0 · MayroseLab/Panoramic · GitHub

3. Open Ubuntu and change into the Panoramic-1.3.0 directory:

```shell
cd Panoramic-1.3.0/
```

4. Run the following command to create the conda environment:

```shell
conda env create -f conda_env/snakemake.yml
```

5. Activate the environment:

```shell
conda activate snakemake-panoramic
```
Let $f(x) = x^2$; its derivative is $f'(x) = 2x^{2-1} = 2x$.
```r
# Initialize parameters
x <- 5
learning_rate <- 0.1
num_iterations <- 100

# Gradient descent loop
for (i in 1:num_iterations) {
  gradient <- 2 * x                   # compute the gradient f'(x) = 2x
  x <- x - learning_rate * gradient   # update the parameter
  cat("Iteration", i, ": x =", x, ", f(x) =", x^2, "\n")
}
cat("Minimum value of f(x) is approximately at x =", x, "\n")
```
Gradient descent is an iterative optimization algorithm for finding the minimum of a function; it is widely used in machine learning and deep learning to fit model parameters by minimizing a loss function. Running the code above produces:
```
Iteration 1 : x = 4 , f(x) = 16
Iteration 2 : x = 3.2 , f(x) = 10.24
Iteration 3 : x = 2.56 , f(x) = 6.5536
Iteration 4 : x = 2.048 , f(x) = 4.194304
Iteration 5 : x = 1.6384 , f(x) = 2.684355
...
Iteration 99 : x = 1.273147e-09 , f(x) = 1.620905e-18
Iteration 100 : x = 1.018518e-09 , f(x) = 1.037379e-18
Minimum value of f(x) is approximately at x = 1.018518e-09
```

Each update multiplies x by 1 − 2 × learning_rate = 0.8, so x shrinks geometrically toward the true minimizer x = 0 (the intermediate iterations are elided above).
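For this particular $f$, the trajectory can also be verified in closed form (a short derivation using the same $x_0 = 5$ and learning rate $\eta = 0.1$ as the code):

$$
x_{k+1} = x_k - \eta\, f'(x_k) = x_k - 0.1 \cdot 2x_k = 0.8\,x_k
\quad\Longrightarrow\quad
x_k = 5 \cdot 0.8^{\,k}
$$

After 100 iterations, $x_{100} = 5 \cdot 0.8^{100} \approx 1.0185 \times 10^{-9}$, matching the final line of the output.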
One of the most common ornamental fish is the guppy. After fertilization, a female guppy does not expel her eggs as most fish do; instead, the eggs hatch inside her body. This mode of reproduction is called ovoviviparity. The key difference between viviparity and oviparity is not whether hatching happens internally, but whether the embryo draws nutrients from a placenta. Guppy eggs develop independently inside the mother's body without drawing on her nutrients, so guppies are not viviparous. Ovoviviparity is thus a reproductive mode intermediate between oviparity and viviparity.
The apply function provides a concise and efficient way to operate on matrices, especially when applying a function across the rows or columns of a large matrix.
```r
# Create a matrix
matrix_data <- matrix(1:9, nrow = 3, byrow = TRUE)
print("Original Matrix:")
print(matrix_data)

# Use apply to sum each row (MARGIN = 1)
row_sums <- apply(matrix_data, 1, sum)
print("Sum of each row:")
print(row_sums)

# Use apply to sum each column (MARGIN = 2)
column_sums <- apply(matrix_data, 2, sum)
print("Sum of each column:")
print(column_sums)
```
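For comparison, the same row and column reductions can be sketched in Python with NumPy (assuming NumPy is available; this is my own analogue, not part of the original R example):

```python
import numpy as np

# Same 3x3 matrix, filled row by row like byrow = TRUE in R
matrix_data = np.arange(1, 10).reshape(3, 3)

row_sums = matrix_data.sum(axis=1)     # collapse columns -> one sum per row
column_sums = matrix_data.sum(axis=0)  # collapse rows -> one sum per column

print(row_sums)     # [ 6 15 24]
print(column_sums)  # [12 15 18]
```

Note the axis conventions differ: R's `MARGIN = 1` (keep rows) corresponds to NumPy's `axis=1` (sum over columns), which is an easy source of confusion when translating between the two.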
| Community \ Term | Rows | Columns |
|---|---|---|
| Statisticians | observations | variables |
| Data analysts | records | fields |
| Machine learning / data mining | examples | attributes |