Research in Python (Federdeck endpoints)
This guide shows a practical workflow for pulling Federdeck response data into Python for analysis. It focuses on fetching, pagination, and shaping into a person×item matrix.
What you need
- Federdeck instance base URL (example: https://federdeck.com)
- Deck AT-URI or Item AT-URI
- Python packages: requests, pandas
Endpoints used
/xrpc/com.federdeck.getResponsesByDeck?deck=AT_URI&limit=200&cursor=...
/xrpc/com.federdeck.getResponsesByItem?item=AT_URI&limit=200&cursor=...
1) Install packages
pip install requests pandas
2) Fetch all pages (cursor pagination)
import requests
import pandas as pd

def ds_fetch_all(url: str, params: dict, max_pages: int = 10**9, verbose: bool = True) -> pd.DataFrame:
    rows = []
    cursor = None
    page = 1
    while True:
        q = dict(params)
        if cursor:
            q["cursor"] = cursor
        r = requests.get(url, params=q, timeout=60)
        r.raise_for_status()
        obj = r.json()
        batch = obj.get("responses", []) or []
        rows.extend(batch)
        if verbose:
            print(f"page={page} rows={len(batch)} total={len(rows)}")
        cursor = obj.get("cursor")
        page += 1
        if not cursor:
            break
        if page > max_pages:
            break
    return pd.DataFrame(rows)
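Long paginated pulls can fail on transient network errors or 5xx responses. One option (an addition on my part, not something the Federdeck docs prescribe) is to mount urllib3 retries on a requests.Session and use that session's get inside the fetch loop:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

def make_retrying_session(total: int = 5, backoff: float = 0.5) -> requests.Session:
    """Build a Session that retries GETs on connection errors and common 5xx/429 statuses."""
    retry = Retry(
        total=total,
        backoff_factor=backoff,  # sleeps ~0.5s, 1s, 2s, ... between attempts
        status_forcelist=(429, 500, 502, 503, 504),
        allowed_methods=frozenset(["GET"]),
    )
    adapter = HTTPAdapter(max_retries=retry)
    session = requests.Session()
    session.mount("https://", adapter)
    session.mount("http://", adapter)
    return session

session = make_retrying_session()
# Inside ds_fetch_all, replace requests.get(...) with:
#     r = session.get(url, params=q, timeout=60)
```

This keeps ds_fetch_all unchanged except for the one line that issues the request.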
3) Example: fetch responses by deck
BASE = "https://federdeck.com"
DECK = "at://did:plc:.../com.federdeck.deck/3k...."
df = ds_fetch_all(
    url=f"{BASE}/xrpc/com.federdeck.getResponsesByDeck",
    params={"deck": DECK, "limit": 200},
    verbose=True,
)
print(df.head())
print(df.columns)
4) Clean and choose attempt policy
# Keep only columns we need, normalize names
df2 = df.rename(columns={
    "userDid": "user",
    "itemId": "item",
    "answeredAt": "answered_at",
    "responseTime": "response_time",
})

# Convert types (invalid timestamps become NaT)
df2["answered_at"] = pd.to_datetime(df2["answered_at"], errors="coerce")

# Drop invalid rows first, so missing values in "correct" cannot break the cast
df2 = df2.dropna(subset=["user", "item", "correct", "answered_at"])
df2["correct"] = df2["correct"].astype("int")
# Last-attempt policy per user×item
df_last = (df2.sort_values(["user", "item", "answered_at"])
               .groupby(["user", "item"], as_index=False)
               .tail(1))
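The chain above implements a last-attempt policy (most recent response wins). A first-attempt policy is sometimes preferred for ability estimation, since later attempts benefit from prior exposure to the item; it is the same chain with head(1) instead of tail(1). A toy sketch with made-up values, using the column names from the cleaning step:

```python
import pandas as pd

# Synthetic responses: u1 answered i1 twice (wrong, then right)
df2 = pd.DataFrame({
    "user": ["u1", "u1", "u2"],
    "item": ["i1", "i1", "i1"],
    "correct": [0, 1, 1],
    "answered_at": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-01"]),
})

# First-attempt policy: keep the earliest response per user×item
df_first = (df2.sort_values(["user", "item", "answered_at"])
                .groupby(["user", "item"], as_index=False)
                .head(1))
```

Under this policy u1's first (incorrect) attempt at i1 is kept, whereas the last-attempt policy would keep the later correct one.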
5) Create person × item matrix (0/1/NA)
mat = df_last.pivot_table(
    index="user",
    columns="item",
    values="correct",
    aggfunc="first",
)
print(mat.shape)
print(mat.head())
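Matrices built this way are often sparse, and users or items with only a handful of observations can distort the CTT statistics below. A minimal filtering sketch on a synthetic matrix (the thresholds are arbitrary examples, not recommendations):

```python
import numpy as np
import pandas as pd

# Synthetic person × item matrix with missing cells
mat = pd.DataFrame(
    [[1, 0, np.nan],
     [np.nan, np.nan, 1],
     [1, 1, 0]],
    index=["u1", "u2", "u3"],
    columns=["i1", "i2", "i3"],
)

MIN_PER_USER = 2  # arbitrary example thresholds
MIN_PER_ITEM = 2

mat_f = mat.loc[mat.notna().sum(axis=1) >= MIN_PER_USER]          # drop sparse users
mat_f = mat_f.loc[:, mat_f.notna().sum(axis=0) >= MIN_PER_ITEM]   # drop sparse items
```

Here u2 (one observation) and then i3 (one observation among the remaining users) are dropped.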
6) Quick CTT checks
# p-values (proportion correct)
p = mat.mean(axis=0, skipna=True).sort_values()
# missingness
missing = mat.isna().mean(axis=0).sort_values(ascending=False)
ctt = pd.DataFrame({"p": p, "missing": missing})
print(ctt.head(20))
Next steps (IRT)
For IRT in Python you can export mat and use your preferred stack (e.g., py-irt, PyMC, Pyro).
The main value of this workflow is consistent fetching plus shaping into a matrix.
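For the hand-off, exporting the matrix in both wide and long form usually covers what IRT tools expect (the file names below are arbitrary):

```python
import numpy as np
import pandas as pd

# Small example matrix standing in for the pivoted mat
mat = pd.DataFrame(
    [[1.0, 0.0], [np.nan, 1.0]],
    index=pd.Index(["u1", "u2"], name="user"),
    columns=pd.Index(["i1", "i2"], name="item"),
)

# Wide: one row per user, one column per item (NA = not answered)
mat.to_csv("responses_wide.csv")

# Long: one row per observed response — the shape most IRT tools expect
long = (mat.stack()
           .dropna()
           .rename("correct")
           .reset_index())
long.to_csv("responses_long.csv", index=False)
```

The long form drops unanswered cells, so each row is a single (user, item, correct) observation.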