Commit 7bd35857 authored by Mathy Vanhoef, committed by Yang Yingliang

mac80211: prevent mixed key and fragment cache attacks


stable inclusion
from linux-4.19.193
commit 76ffc27967211afba6f0045ac840e7027fbeefcf
CVE: CVE-2020-24587, CVE-2020-24586

--------------------------------

commit 94034c40ab4a3fcf581fbc7f8fdf4e29943c4a24 upstream.

Simultaneously prevent mixed key attacks (CVE-2020-24587) and fragment
cache attacks (CVE-2020-24586). This is accomplished by assigning a
unique color to every key (per interface) and using this to track which
key was used to decrypt a fragment. When reassembling frames, it is
now checked whether all fragments were decrypted using the same key.

To assure that fragment cache attacks are also prevented, the ID that is
assigned to keys is unique even over (re)associations and (re)connects.
This means fragments separated by a (re)association or (re)connect will
not be reassembled. Because mac80211 now also prevents the reassembly of
mixed encrypted and plaintext fragments, all cache attacks are prevented.
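The essence of the fix can be sketched outside the kernel: each installed key takes a value from a counter that is never reset, the fragment cache records the value of the key that decrypted the first fragment, and reassembly refuses fragments decrypted under any other key. The minimal C sketch below only illustrates that pattern; the names used here (struct key, struct frag_entry, key_install, frag_cache_start, frag_accept) are hypothetical and are not mac80211 APIs.

	/*
	 * Illustrative sketch only, not kernel code: every installed key gets a
	 * unique "color" from a counter that is never reset, the fragment cache
	 * remembers the color of the key that decrypted the first fragment, and
	 * reassembly refuses fragments decrypted under a key with another color.
	 */
	#include <stdatomic.h>
	#include <stdbool.h>

	struct key {
		unsigned int color;		/* unique ID assigned at install time */
	};

	struct frag_entry {
		unsigned int key_color;		/* color of the key used for fragment 0 */
		bool in_use;
	};

	/* Never reset, so colors stay unique across (re)associations as well. */
	static atomic_uint next_key_color;

	static void key_install(struct key *k)
	{
		/* Userspace counterpart of key->color = atomic_inc_return(&key_color); */
		k->color = atomic_fetch_add(&next_key_color, 1) + 1;
	}

	static void frag_cache_start(struct frag_entry *e, const struct key *k)
	{
		e->key_color = k->color;	/* key that decrypted fragment 0 */
		e->in_use = true;
	}

	static bool frag_accept(const struct frag_entry *e, const struct key *k)
	{
		/* Reject a fragment that was decrypted with a different key. */
		return e->in_use && e->key_color == k->color;
	}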

Cc: stable@vger.kernel.org
Signed-off-by: Mathy Vanhoef <Mathy.Vanhoef@kuleuven.be>
Link: https://lore.kernel.org/r/20210511200110.3f8290e59823.I622a67769ed39257327a362cfc09c812320eb979@changeid


Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
Reviewed-by: Yue Haibing <yuehaibing@huawei.com>
Reviewed-by: Xiu Jianfeng <xiujianfeng@huawei.com>
Signed-off-by: Yang Yingliang <yangyingliang@huawei.com>
parent 7796a592
net/mac80211/ieee80211_i.h
@@ -100,6 +100,7 @@ struct ieee80211_fragment_entry {
 	u8 rx_queue;
 	bool check_sequential_pn; /* needed for CCMP/GCMP */
 	u8 last_pn[6]; /* PN of the last fragment if CCMP was used */
+	unsigned int key_color;
 };
net/mac80211/key.c
@@ -653,6 +653,7 @@ int ieee80211_key_link(struct ieee80211_key *key,
 		       struct ieee80211_sub_if_data *sdata,
 		       struct sta_info *sta)
 {
+	static atomic_t key_color = ATOMIC_INIT(0);
 	struct ieee80211_local *local = sdata->local;
 	struct ieee80211_key *old_key;
 	int idx = key->conf.keyidx;
@@ -688,6 +689,12 @@ int ieee80211_key_link(struct ieee80211_key *key,
 	key->sdata = sdata;
 	key->sta = sta;
 
+	/*
+	 * Assign a unique ID to every key so we can easily prevent mixed
+	 * key and fragment cache attacks.
+	 */
+	key->color = atomic_inc_return(&key_color);
+
 	increment_tailroom_need_count(sdata);
 
 	ieee80211_key_replace(sdata, sta, pairwise, old_key, key);
net/mac80211/key.h
@@ -127,6 +127,8 @@ struct ieee80211_key {
 	} debugfs;
 #endif
 
+	unsigned int color;
+
 	/*
 	 * key config, must be last because it contains key
 	 * material as variable length member
net/mac80211/rx.c
@@ -2141,6 +2141,7 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
 			 * next fragment has a sequential PN value.
 			 */
 			entry->check_sequential_pn = true;
+			entry->key_color = rx->key->color;
 			memcpy(entry->last_pn,
 			       rx->key->u.ccmp.rx_pn[queue],
 			       IEEE80211_CCMP_PN_LEN);
@@ -2178,6 +2179,11 @@ ieee80211_rx_h_defragment(struct ieee80211_rx_data *rx)
 		if (!requires_sequential_pn(rx, fc))
 			return RX_DROP_UNUSABLE;
 
+		/* Prevent mixed key and fragment cache attacks */
+		if (entry->key_color != rx->key->color)
+			return RX_DROP_UNUSABLE;
+
 		memcpy(pn, entry->last_pn, IEEE80211_CCMP_PN_LEN);
 		for (i = IEEE80211_CCMP_PN_LEN - 1; i >= 0; i--) {
 			pn[i]++;